Compare commits

64 Commits

8fdbe4d334
406a1485da
6dc66b1c27
f2b1a1dd3b
cb166a399d
7108dd0111
2105754911
f3ba4cbfd3
e5f7085324
578481324b
bf8ac9850d
ab408b6412
4aa0a6f234
f9646e3386
3b612b960e
b0ad4bead0
4b2e573715
12e6916ab2
1e76f63725
7aa5ad2eb9
fe331bb544
03ac85a967
96589ff659
e5b5809764
1407418c60
383b51c35d
5b9123d128
2b1a1c21e4
ea080ca1c7
028fe58561
b6a41872d5
731dabd74d
7950f19c2d
b27e504bbd
33106ba772
3ae0783d00
e8201865eb
5f534a60cc
ce90f6774b
48e41cbd2c
603c5607f7
bb72e903d1
65e1948835
310977e9b8
b10586bf55
1741807be4
9a4548def4
b4e5f53726
f7559b2c4f
9a7a3ff2c7
c4805806be
161bcf42ad
8e6ead416c
e6defc7948
33d5da6a4f
1110372a55
e6a00d6b8f
57cd746e5c
878e16286b
4726ccfb8c
598dad1e6d
01c0d7e49b
a170a3ce01
9bb8fc50fe
2 .flake8

```diff
@@ -1,3 +1,3 @@
 [flake8]
-ignore = W191, E501, W503
+ignore = W191, E501, W503, E203
 max-line-length = 120
```
2 .gitignore (vendored)

```diff
@@ -143,3 +143,5 @@ dmypy.json
 cython_debug/
 
 *.csv
+
+local_scripts/
```
117 CHANGELOG.md

```markdown
@@ -2,6 +2,123 @@

All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.

## [1.0.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.1...1.0.0) (2024-05-01)


### ⚠ BREAKING CHANGES

* allows new seed spec instead of cli arg, removes old cli arg

### Features

* adds additional file slug parsing ([2105754](https://gitea.deepak.science:2222/physics/deepdog/commit/2105754911c89bde9dcbea9866462225604a3524))
* Adds more powerful direct mc runs to sub for old real spectrum run ([f2b1a1d](https://gitea.deepak.science:2222/physics/deepdog/commit/f2b1a1dd3b3436e37d84f7843b9b2a202be4b51c))
* allows new seed spec instead of cli arg, removes old cli arg ([7108dd0](https://gitea.deepak.science:2222/physics/deepdog/commit/7108dd0111c7dfd6ec204df1d0058530cd3dcab9))


### Bug Fixes

* no longer throws error for overlapping keys, the warning should hopefully be enough? ([f3ba4cb](https://gitea.deepak.science:2222/physics/deepdog/commit/f3ba4cbfd36a9f08cdc4d8774a7f745f8c98bac3))

### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)

### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)

## [0.8.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.10...0.8.0) (2024-04-28)


### ⚠ BREAKING CHANGES

* fixes the spin qubit frequency phase shift calculation which had an index problem

### Bug Fixes

* fixes the spin qubit frequency phase shift calculation which had an index problem ([f9646e3](https://gitea.deepak.science:2222/physics/deepdog/commit/f9646e33868e1a0da8ab663230c0c692ac25bb74))

### [0.7.10](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.9...0.7.10) (2024-04-28)


### Features

* adds cli probs ([4b2e573](https://gitea.deepak.science:2222/physics/deepdog/commit/4b2e57371546731137b011461849bb849d4d4e0f))
* better management of cli wrapper ([b0ad4be](https://gitea.deepak.science:2222/physics/deepdog/commit/b0ad4bead0d4762eb7f848f6e557f6d9b61200b9))

### [0.7.9](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.8...0.7.9) (2024-04-21)


### Features

* adds ability to write custom dmc filters ([ea080ca](https://gitea.deepak.science:2222/physics/deepdog/commit/ea080ca1c7068042ce1e0a222d317f785a6b05f4))
* adds tarucha phase calculation, using spin qubit precession rate noise ([3ae0783](https://gitea.deepak.science:2222/physics/deepdog/commit/3ae0783d00cbe6a76439c1d671f2cff621d8d0a8))

### [0.7.8](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.7...0.7.8) (2024-02-29)


### Bug Fixes

* uses correct measurements ([5f534a6](https://gitea.deepak.science:2222/physics/deepdog/commit/5f534a60cc7c4838fcacee11a7e58b97d34e154a))

### [0.7.7](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.6...0.7.7) (2024-02-29)


### Bug Fixes

* fixes phase calculation issue with setting input array ([48e41cb](https://gitea.deepak.science:2222/physics/deepdog/commit/48e41cbd2c58d4c4d2747822d618d7d55257643d))

### [0.7.6](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.5...0.7.6) (2024-02-28)


### Features

* adds ability to use phase measurements only for correlations ([bb72e90](https://gitea.deepak.science:2222/physics/deepdog/commit/bb72e903d14704a3783daf2dbc1797b90880aa85))


### Bug Fixes

* fixes typeerror vs indexerror on bare float as cost in subset simulation ([65e1948](https://gitea.deepak.science:2222/physics/deepdog/commit/65e19488359d7f5656660da7da8f32ed474989c3))

### [0.7.5](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.4...0.7.5) (2023-12-09)


### Features

* adds direct monte carlo package ([1741807](https://gitea.deepak.science:2222/physics/deepdog/commit/1741807be43d08fb51bc94518dd3b67585c04c20))
* adds longchain logging if logging last generation ([b4e5f53](https://gitea.deepak.science:2222/physics/deepdog/commit/b4e5f5372682fc64c3734a96c4a899e018f127ce))
* allows disabling timestamp in subset simulation bayes results ([9a4548d](https://gitea.deepak.science:2222/physics/deepdog/commit/9a4548def45a01f1f518135d4237c3dc09dcc342))

### [0.7.4](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.3...0.7.4) (2023-07-27)


### Features

* adds configurable chunk size for the initial mc level 0 SS stage cost calculation to reduce memory usage ([9a7a3ff](https://gitea.deepak.science:2222/physics/deepdog/commit/9a7a3ff2c7ebe81d5e10647ce39844c372ff7b07))
* allows for deepdog bayesrun with ss to not print csv to make snapshot testing possible ([8e6ead4](https://gitea.deepak.science:2222/physics/deepdog/commit/8e6ead416c9eba56f568f648d0df44caaa510cfe))


### Bug Fixes

* fixes bug if case of clamping necessary ([161bcf4](https://gitea.deepak.science:2222/physics/deepdog/commit/161bcf42addf331661c3929073688b9f2c13502c))
* fixes bug with clamped probabilities being underestimated ([e6defc7](https://gitea.deepak.science:2222/physics/deepdog/commit/e6defc794871a48ac331023eb477bd235b78d6d0))

### [0.7.3](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.2...0.7.3) (2023-07-27)


### Features

* adds utility options and avoids memory leak ([598dad1](https://gitea.deepak.science:2222/physics/deepdog/commit/598dad1e6dc8fc0b7a5b4a90c8e17bf744e8d98c))

### [0.7.2](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.1...0.7.2) (2023-07-24)


### Features

* clamps results now ([9bb8fc5](https://gitea.deepak.science:2222/physics/deepdog/commit/9bb8fc50fe1bd1a285a333c5a396bfb6ac3176cf))


### Bug Fixes

* fixes clamping format etc. ([a170a3c](https://gitea.deepak.science:2222/physics/deepdog/commit/a170a3ce01adcec356e5aaab9abcc0ec4accd64b))

### [0.7.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.0...0.7.1) (2023-07-24)
```
11 README.md

```diff
@@ -5,7 +5,7 @@
 [](https://jenkins.deepak.science/job/gitea-physics/job/deepdog/job/master/)
 (several status-badge images; image URLs not captured in this view)
 
 The DiPole DiaGnostic tool.
@@ -13,6 +13,13 @@ The DiPole DiaGnostic tool.
 
 `poetry install` to start locally
 
-Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `doo release`.
+Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `just release`.
+
+In general `just --list` has some of the useful stuff for figuring out what development tools there are.
+
+Poetry as an installer is good, even better is using Nix (maybe with direnv to automatically pick up the `devShell` from `flake.nix`).
+In either case `just` should handle actually calling things in a way that's agnostic to poetry as a runner or through nix.
+
+### local scripts
+`local_scripts` folder allows for scripts to be run using this code, but that probably isn't the most auditable for actual usage.
 The API is still only something I'm using so there's no guarantees yet that it will be stable; overall semantic versioning should help with API breaks.
```
(file header not captured)

```diff
@@ -20,6 +20,8 @@ CHUNKSIZE = 50
 DotInput = Tuple[numpy.typing.ArrayLike, float]
 
 
+CLAMPING_FACTOR = 10
+
 _logger = logging.getLogger(__name__)
 
 
@@ -68,6 +70,10 @@ class BayesRunWithSubspaceSimulation:
        ss_default_r_step=0.01,
        ss_default_w_log_step=0.01,
        ss_default_upper_w_log_step=4,
+       ss_dump_last_generation=False,
+       ss_initial_costs_chunk_size=100,
+       write_output_to_bayesruncsv=True,
+       use_timestamp_for_output=True,
    ) -> None:
        self.dot_inputs = pdme.inputs.inputs_with_frequency_range(
            dot_positions, frequency_range
@@ -105,8 +111,11 @@ class BayesRunWithSubspaceSimulation:
 
        self.probabilities = [1 / self.model_count] * self.model_count
 
-       timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
-       self.filename = f"{timestamp}-{filename_slug}.bayesrunwithss.csv"
+       if use_timestamp_for_output:
+           timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
+           self.filename = f"{timestamp}-{filename_slug}.bayesrunwithss.csv"
+       else:
+           self.filename = f"{filename_slug}.bayesrunwithss.csv"
        self.max_frequency = max_frequency
 
        if end_threshold is not None:
@@ -131,13 +140,22 @@ class BayesRunWithSubspaceSimulation:
        self.ss_default_r_step = ss_default_r_step
        self.ss_default_w_log_step = ss_default_w_log_step
        self.ss_default_upper_w_log_step = ss_default_upper_w_log_step
+
+       self.ss_dump_last_generation = ss_dump_last_generation
+       self.ss_initial_costs_chunk_size = ss_initial_costs_chunk_size
        self.run_count = run_count
 
-   def go(self) -> None:
-       with open(self.filename, "a", newline="") as outfile:
-           writer = csv.DictWriter(outfile, fieldnames=self.csv_fields, dialect="unix")
-           writer.writeheader()
+       self.write_output_to_csv = write_output_to_bayesruncsv
+
+   def go(self) -> Sequence:
+
+       if self.write_output_to_csv:
+           with open(self.filename, "a", newline="") as outfile:
+               writer = csv.DictWriter(
+                   outfile, fieldnames=self.csv_fields, dialect="unix"
+               )
+               writer.writeheader()
+
+       return_result = []
 
        for run in range(1, self.run_count + 1):
 
@@ -170,6 +188,9 @@ class BayesRunWithSubspaceSimulation:
                self.ss_default_r_step,
                self.ss_default_w_log_step,
                self.ss_default_upper_w_log_step,
+               initial_cost_chunk_size=self.ss_initial_costs_chunk_size,
+               keep_probs_list=False,
+               dump_last_generation_to_file=self.ss_dump_last_generation,
            )
            results.append(subset_run.execute())
 
@@ -193,10 +214,18 @@ class BayesRunWithSubspaceSimulation:
 
            for (name, result) in zip(self.model_names, results):
                if result.over_target_likelihood is None:
-                   _logger.error("got a none result, bye")
-                   return
-               likelihoods.append(result.over_target_likelihood)
-               row[f"{name}_likelihood"] = result.over_target_likelihood
+                   if result.lowest_likelihood is None:
+                       _logger.error(f"result {result} looks bad")
+                       clamped_likelihood = 10**-15
+                   else:
+                       clamped_likelihood = result.lowest_likelihood / CLAMPING_FACTOR
+                   _logger.warning(
+                       f"got a none result, clamping to {clamped_likelihood}"
+                   )
+               else:
+                   clamped_likelihood = result.over_target_likelihood
+               likelihoods.append(clamped_likelihood)
+               row[f"{name}_likelihood"] = clamped_likelihood
 
            success_weight = sum(
                [
@@ -212,12 +241,14 @@ class BayesRunWithSubspaceSimulation:
            for name, probability in zip(self.model_names, self.probabilities):
                row[f"{name}_prob"] = probability
            _logger.info(row)
+           return_result.append(row)
 
-           with open(self.filename, "a", newline="") as outfile:
-               writer = csv.DictWriter(
-                   outfile, fieldnames=self.csv_fields, dialect="unix"
-               )
-               writer.writerow(row)
+           if self.write_output_to_csv:
+               with open(self.filename, "a", newline="") as outfile:
+                   writer = csv.DictWriter(
+                       outfile, fieldnames=self.csv_fields, dialect="unix"
+                   )
+                   writer.writerow(row)
 
            if self.use_end_threshold:
                max_prob = max(self.probabilities)
@@ -226,3 +257,5 @@ class BayesRunWithSubspaceSimulation:
                        f"Aborting early, because {max_prob} is greater than {self.end_threshold}"
                    )
                    break
+
+       return return_result
```
0 deepdog/cli/__init__.py (new, empty file)
5 deepdog/cli/probs/__init__.py (new file)

```python
from deepdog.cli.probs.main import wrapped_main


__all__ = [
    "wrapped_main",
]
```
51 deepdog/cli/probs/args.py (new file)

```python
import argparse
import os


def parse_args() -> argparse.Namespace:
    def dir_path(path):
        if os.path.isdir(path):
            return path
        else:
            raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")

    parser = argparse.ArgumentParser(
        "probs", description="Calculating probability from finished bayesrun"
    )
    parser.add_argument(
        "--log_file",
        type=str,
        help="A filename for logging to, if not provided will only log to stderr",
        default=None,
    )
    parser.add_argument(
        "--bayesrun-directory",
        "-d",
        type=dir_path,
        help="The directory to search for bayesrun files, defaulting to cwd if not passed",
        default=".",
    )
    parser.add_argument(
        "--indexify-json",
        help="A json file with the indexify config for parsing job indexes. Will skip if not present",
        default="",
    )
    parser.add_argument(
        "--coalesced-keys",
        type=str,
        help="A comma separated list of strings over which to coalesce data. By default coalesce over all fields within model names, ignore file level names",
        default="",
    )
    parser.add_argument(
        "--uncoalesced-outfile",
        type=str,
        help="output filename for uncoalesced data. If not provided, will not be written",
        default=None,
    )
    parser.add_argument(
        "--coalesced-outfile",
        type=str,
        help="output filename for coalesced data. If not provided, will not be written",
        default=None,
    )
    return parser.parse_args()
```
178 deepdog/cli/probs/dicts.py (new file)

```python
import typing
from deepdog.results import BayesrunOutput
import logging
import csv
import tqdm

_logger = logging.getLogger(__name__)


def build_model_dict(
    bayes_outputs: typing.Sequence[BayesrunOutput],
) -> typing.Dict[
    typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
]:
    """
    Maybe someday do something smarter with the coalescing and stuff but don't want to so i won't
    """
    # assume that everything is well formatted and the keys are the same across entire list and initialise list of keys.
    # model dict will contain a model_key: {calculation_dict} where each calculation_dict represents a single calculation for that model,
    # the uncoalesced version, keyed by the specific file keys
    model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ] = {}

    _logger.info("building model dict")
    for out in tqdm.tqdm(bayes_outputs, desc="reading outputs", leave=False):
        for model_result in out.results:
            model_key = tuple(v for v in model_result.parsed_model_keys.values())
            if model_key not in model_dict:
                model_dict[model_key] = {}
            calculation_dict = model_dict[model_key]
            calculation_key = tuple(v for v in out.data.values())
            if calculation_key not in calculation_dict:
                calculation_dict[calculation_key] = {
                    "_model_key_dict": model_result.parsed_model_keys,
                    "_calculation_key_dict": out.data,
                    "success": model_result.success,
                    "count": model_result.count,
                }
            else:
                raise ValueError(
                    f"Got {calculation_key} twice for model_key {model_key}"
                )

    return model_dict


def write_uncoalesced_dict(
    uncoalesced_output_filename: typing.Optional[str],
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
):
    if uncoalesced_output_filename is None or uncoalesced_output_filename == "":
        _logger.warning("Not provided a uncoalesced filename, not going to try")
        return

    first_value = next(iter(next(iter(uncoalesced_model_dict.values())).values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    calculation_field_names = set(first_value["_calculation_key_dict"].keys())
    if not (set(model_field_names).isdisjoint(calculation_field_names)):
        _logger.info(f"Detected model field names {model_field_names}")
        _logger.info(f"Detected calculation field names {calculation_field_names}")
        _logger.warning(
            f"model field names {model_field_names} and calculation {calculation_field_names} have an overlap, which is possibly a problem"
        )
    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(calculation_field_names)
    collected_fieldnames.extend(["success", "count"])
    _logger.info(f"Full uncoalesced fieldnames are {collected_fieldnames}")
    with open(uncoalesced_output_filename, "w", newline="") as uncoalesced_output_file:
        writer = csv.DictWriter(
            uncoalesced_output_file, fieldnames=collected_fieldnames
        )
        writer.writeheader()

        for model_dict in uncoalesced_model_dict.values():
            for calculation in model_dict.values():
                row = calculation["_model_key_dict"].copy()
                row.update(calculation["_calculation_key_dict"].copy())
                row.update(
                    {
                        "success": calculation["success"],
                        "count": calculation["count"],
                    }
                )
                writer.writerow(row)


def coalesced_dict(
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
    minimum_count: float = 0.1,
):
    """
    pass in uncoalesced dict
    the minimum_count field is what we use to make sure our probs are never zero
    """
    coalesced_dict = {}

    # we are already iterating so for no reason because performance really doesn't matter let's count the keys ourselves
    num_keys = 0

    # first pass coalesce
    for model_key, model_dict in uncoalesced_model_dict.items():
        num_keys += 1
        for calculation in model_dict.values():
            if model_key not in coalesced_dict:
                coalesced_dict[model_key] = {
                    "_model_key_dict": calculation["_model_key_dict"].copy(),
                    "calculations_coalesced": 0,
                    "count": 0,
                    "success": 0,
                }
            sub_dict = coalesced_dict[model_key]
            sub_dict["calculations_coalesced"] += 1
            sub_dict["count"] += calculation["count"]
            sub_dict["success"] += calculation["success"]

    # second pass do probability calculation

    prior = 1 / num_keys
    _logger.info(f"Got {num_keys} model keys, so our prior will be {prior}")

    total_weight = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        ) * prior
        total_weight += model_weight

    total_prob = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        )
        prob = model_weight * prior / total_weight
        coalesced_model_dict["prob"] = prob
        total_prob += prob

    _logger.debug(
        f"Got a total probability of {total_prob}, which should be close to 1 up to float/rounding error"
    )
    return coalesced_dict


def write_coalesced_dict(
    coalesced_output_filename: typing.Optional[str],
    coalesced_model_dict: typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]],
):
    if coalesced_output_filename is None or coalesced_output_filename == "":
        _logger.warning("Not provided a uncoalesced filename, not going to try")
        return

    first_value = next(iter(coalesced_model_dict.values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    _logger.info(f"Detected model field names {model_field_names}")

    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(["calculations_coalesced", "success", "count", "prob"])
    with open(coalesced_output_filename, "w", newline="") as coalesced_output_file:
        writer = csv.DictWriter(coalesced_output_file, fieldnames=collected_fieldnames)
        writer.writeheader()

        for model_dict in coalesced_model_dict.values():
            row = model_dict["_model_key_dict"].copy()
            row.update(
                {
                    "calculations_coalesced": model_dict["calculations_coalesced"],
                    "success": model_dict["success"],
                    "count": model_dict["count"],
                    "prob": model_dict["prob"],
                }
            )
            writer.writerow(row)
```
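The probability step in `coalesced_dict` above is a uniform-prior Bayes update: each model's weight is `max(minimum_count, success) / count * prior`, and the weights are normalised so they sum to one. A minimal sketch of that arithmetic with invented numbers (not from any real bayesrun):

```python
# Sketch of the weighting used in coalesced_dict, with invented numbers.
# Two hypothetical models; success/count values are illustrative only.
models = {
    "model_a": {"success": 40, "count": 10000},
    "model_b": {"success": 0, "count": 10000},  # zero successes get floored by minimum_count
}

minimum_count = 0.1
prior = 1 / len(models)  # uniform prior over models

weights = {
    name: (max(minimum_count, m["success"]) / m["count"]) * prior
    for name, m in models.items()
}
total_weight = sum(weights.values())

probs = {name: weight / total_weight for name, weight in weights.items()}
print(probs)  # roughly {'model_a': 0.9975, 'model_b': 0.0025}; sums to 1
```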
99 deepdog/cli/probs/main.py (new file)

```python
import logging
import argparse
import json
import deepdog.cli.probs.args
import deepdog.cli.probs.dicts
import deepdog.results
import deepdog.indexify
import pathlib
import tqdm
import tqdm.contrib.logging


_logger = logging.getLogger(__name__)


def set_up_logging(log_file: str):

    log_pattern = "%(asctime)s | %(levelname)-7s | %(name)s:%(lineno)d | %(message)s"
    if log_file is None:
        handlers = [
            logging.StreamHandler(),
        ]
    else:
        handlers = [logging.StreamHandler(), logging.FileHandler(log_file)]
    logging.basicConfig(
        level=logging.DEBUG,
        format=log_pattern,
        # it's okay to ignore this mypy error because who cares about logger handler types
        handlers=handlers,  # type: ignore
    )
    logging.captureWarnings(True)


def main(args: argparse.Namespace):
    """
    Main function with passed in arguments and no additional logging setup in case we want to extract out later
    """

    with tqdm.contrib.logging.logging_redirect_tqdm():
        _logger.info(f"args: {args}")

        try:
            if args.coalesced_keys:
                raise NotImplementedError(
                    "Currently not supporting coalesced keys, but maybe in future"
                )
        except AttributeError:
            # we don't care if this is missing because we don't actually want it to be there
            pass

        indexifier = None
        if args.indexify_json:
            with open(args.indexify_json, "r") as indexify_json_file:
                indexify_spec = json.load(indexify_json_file)
                indexify_data = indexify_spec["indexes"]
                if "seed_spec" in indexify_spec:
                    seed_spec = indexify_spec["seed_spec"]
                    indexify_data[seed_spec["field_name"]] = list(
                        range(seed_spec["num_seeds"])
                    )
                # _logger.debug(f"Indexifier data looks like {indexify_data}")
                indexifier = deepdog.indexify.Indexifier(indexify_data)

        bayes_dir = pathlib.Path(args.bayesrun_directory)
        out_files = [f for f in bayes_dir.iterdir() if f.name.endswith("bayesrun.csv")]
        _logger.info(
            f"Reading {len(out_files)} bayesrun.csv files in directory {args.bayesrun_directory}"
        )
        # _logger.info(out_files)
        parsed_output_files = [
            deepdog.results.read_output_file(f, indexifier)
            for f in tqdm.tqdm(out_files, desc="reading files", leave=False)
        ]

        _logger.info("building uncoalesced dict")
        uncoalesced_dict = deepdog.cli.probs.dicts.build_model_dict(parsed_output_files)

        if "uncoalesced_outfile" in args and args.uncoalesced_outfile:
            deepdog.cli.probs.dicts.write_uncoalesced_dict(
                args.uncoalesced_outfile, uncoalesced_dict
            )
        else:
            _logger.info("Skipping writing uncoalesced")

        _logger.info("building coalesced dict")
        coalesced = deepdog.cli.probs.dicts.coalesced_dict(uncoalesced_dict)

        if "coalesced_outfile" in args and args.coalesced_outfile:
            deepdog.cli.probs.dicts.write_coalesced_dict(
                args.coalesced_outfile, coalesced
            )
        else:
            _logger.info("Skipping writing coalesced")


def wrapped_main():
    args = deepdog.cli.probs.args.parse_args()
    set_up_logging(args.log_file)
    main(args)
```
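`main` expects the `--indexify-json` file to contain an `indexes` mapping and, optionally, a `seed_spec` with `field_name` and `num_seeds`. A sketch of such a spec follows; the index names and values are invented for illustration, only the `indexes`/`seed_spec`/`field_name`/`num_seeds` structure comes from the code above:

```python
# Sketch of an indexify spec as consumed by main() above; the particular index
# names and value lists here are hypothetical.
import json

example_spec = {
    "indexes": {
        "orientation": ["free", "fixedxy", "fixedz"],
        "avg_filled": [1, 5, 10],
    },
    "seed_spec": {"field_name": "seed", "num_seeds": 100},
}

with open("indexify.json", "w") as f:
    json.dump(example_spec, f, indent=2)

# Passing this file via --indexify-json would expand the seed field to range(100)
# and build an Indexifier over the combined index lists.
```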
6 deepdog/direct_monte_carlo/__init__.py (new file)

```python
from deepdog.direct_monte_carlo.direct_mc import (
    DirectMonteCarloRun,
    DirectMonteCarloConfig,
)

__all__ = ["DirectMonteCarloRun", "DirectMonteCarloConfig"]
```
14 deepdog/direct_monte_carlo/compose_filter.py (new file)

```python
from typing import Sequence
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
import numpy


class ComposedDMCFilter(DirectMonteCarloFilter):
    def __init__(self, filters: Sequence[DirectMonteCarloFilter]):
        self.filters = filters

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        current_sample = samples
        for filter in self.filters:
            current_sample = filter.filter_samples(current_sample)
        return current_sample
```
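`ComposedDMCFilter` simply chains filters, feeding each one the samples that survived the previous one. A sketch of composing two filters; the toy filter classes here are invented for illustration, real runs would use the filters from `dmc_filters.py` below:

```python
# Sketch only: two toy DirectMonteCarloFilter subclasses (invented for illustration)
# composed with ComposedDMCFilter so each filter sees the survivors of the previous one.
import numpy
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
from deepdog.direct_monte_carlo.compose_filter import ComposedDMCFilter


class KeepPositiveFirstComponent(DirectMonteCarloFilter):
    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        # keep configurations whose first dipole has a positive first parameter
        return samples[samples[:, 0, 0] > 0]


class KeepSmallMagnitude(DirectMonteCarloFilter):
    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        # keep configurations whose first dipole vector has small norm
        return samples[numpy.linalg.norm(samples[:, 0, 0:3], axis=-1) < 1]


composed = ComposedDMCFilter([KeepPositiveFirstComponent(), KeepSmallMagnitude()])
samples = numpy.random.default_rng(0).normal(size=(1000, 1, 7))  # (configs, dipoles, params)
survivors = composed.filter_samples(samples)
print(survivors.shape)
```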
319 deepdog/direct_monte_carlo/direct_mc.py (new file)

```python
import csv
import pdme.model
import pdme.measurement
import pdme.measurement.input_types
import pdme.subspace_simulation
import datetime
from typing import Tuple, Dict, NewType, Any, Sequence
from dataclasses import dataclass
import logging
import numpy
import numpy.random
import pdme.util.fast_v_calc
import multiprocessing

_logger = logging.getLogger(__name__)


@dataclass
class DirectMonteCarloResult:
    successes: int
    monte_carlo_count: int
    likelihood: float
    model_name: str


@dataclass
class DirectMonteCarloConfig:
    monte_carlo_count_per_cycle: int = 10000
    monte_carlo_cycles: int = 10
    target_success: int = 100
    max_monte_carlo_cycles_steps: int = 10
    monte_carlo_seed: int = 1234
    write_successes_to_file: bool = False
    tag: str = ""
    cap_core_count: int = 0  # 0 means cap at num cores - 1
    chunk_size: int = 50
    write_bayesrun_file = True
    # chunk size of some kind


# Aliasing dict as a generic data container
DirectMonteCarloData = NewType("DirectMonteCarloData", Dict[str, Any])


class DirectMonteCarloFilter:
    """
    Abstract class for filtering out samples matching some criteria. Initialise with data as needed,
    then filter out samples as needed.
    """

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        raise NotImplementedError


class DirectMonteCarloRun:
    """
    A single model Direct Monte Carlo run, currently implemented only using single threading.
    An encapsulation of the steps needed for a Bayes run.

    Parameters
    ----------
    model_name_pairs : Sequence[Tuple(str, pdme.model.DipoleModel)]
        The models to evaluate, with names

    measurements: Sequence[pdme.measurement.DotRangeMeasurement]
        The measurements as dot ranges to use as the bounds for the Monte Carlo calculation.

    monte_carlo_count_per_cycle: int
        The number of Monte Carlo iterations to use in a single cycle calculation.

    monte_carlo_cycles: int
        The number of cycles to use in each step.
        Increasing monte_carlo_count_per_cycle increases memory usage (and runtime), while this increases runtime, allowing
        control over memory use.

    target_success: int
        The number of successes to target before exiting early.
        Should likely be ~100 but can go higher to.

    max_monte_carlo_cycles_steps: int
        The number of steps to use. Each step consists of monte_carlo_cycles cycles, each of which has monte_carlo_count_per_cycle iterations.

    monte_carlo_seed: int
        The seed to use for the RNG.
    """

    def __init__(
        self,
        model_name_pairs: Sequence[Tuple[str, pdme.model.DipoleModel]],
        filter: DirectMonteCarloFilter,
        config: DirectMonteCarloConfig,
    ):
        self.model_name_pairs = model_name_pairs

        # self.measurements = measurements
        # self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]

        # self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
        #     self.dot_inputs
        # )

        self.config = config
        self.filter = filter
        # (
        #     self.lows,
        #     self.highs,
        # ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
        #     self.measurements
        # )

    def _single_run(
        self, model_name_pair: Tuple[str, pdme.model.DipoleModel], seed
    ) -> numpy.ndarray:
        rng = numpy.random.default_rng(seed)

        _, model = model_name_pair
        # don't log here it's madness
        # _logger.info(f"Executing for model {model_name}")

        sample_dipoles = model.get_monte_carlo_dipole_inputs(
            self.config.monte_carlo_count_per_cycle, -1, rng
        )

        current_sample = sample_dipoles

        return self.filter.filter_samples(current_sample)
        # for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):

        #     if len(current_sample) < 1:
        #         break
        #     vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
        #         numpy.array([di]), current_sample
        #     )

        #     current_sample = current_sample[
        #         numpy.all((vals > low) & (vals < high), axis=1)
        #     ]
        # return current_sample

    def _wrapped_single_run(self, args: Tuple):
        """
        single run wrapped up for multiprocessing call.

        takes in a tuple of arguments corresponding to
        (model_name_pair, seed)
        """
        # here's where we do our work

        model_name_pair, seed = args
        cycle_success_configs = self._single_run(model_name_pair, seed)
        cycle_success_count = len(cycle_success_configs)

        return cycle_success_count

    def execute_no_multiprocessing(self) -> Sequence[DirectMonteCarloResult]:

        count_per_step = (
            self.config.monte_carlo_count_per_cycle * self.config.monte_carlo_cycles
        )
        seed_sequence = numpy.random.SeedSequence(self.config.monte_carlo_seed)

        # core count etc. logic here

        results = []
        for model_name_pair in self.model_name_pairs:
            step_count = 0
            total_success = 0
            total_count = 0

            _logger.info(f"Working on model {model_name_pair[0]}")
            # This is probably where multiprocessing logic should go
            while (step_count < self.config.max_monte_carlo_cycles_steps) and (
                total_success < self.config.target_success
            ):
                _logger.debug(f"Executing step {step_count}")
                for cycle_i, seed in enumerate(
                    seed_sequence.spawn(self.config.monte_carlo_cycles)
                ):
                    # here's where we do our work
                    cycle_success_configs = self._single_run(model_name_pair, seed)
                    cycle_success_count = len(cycle_success_configs)
                    if cycle_success_count > 0:
                        _logger.debug(
                            f"For cycle {cycle_i} received {cycle_success_count} successes"
                        )
                        # _logger.debug(cycle_success_configs)
                        if self.config.write_successes_to_file:
                            sorted_by_freq = numpy.array(
                                [
                                    pdme.subspace_simulation.sort_array_of_dipoles_by_frequency(
                                        dipole_config
                                    )
                                    for dipole_config in cycle_success_configs
                                ]
                            )
                            dipole_count = numpy.array(cycle_success_configs).shape[1]
                            for n in range(dipole_count):
                                numpy.savetxt(
                                    f"{self.config.tag}_{step_count}_{cycle_i}_dipole_{n}.csv",
                                    sorted_by_freq[:, n],
                                    delimiter=",",
                                )
                    total_success += cycle_success_count
                _logger.debug(
                    f"At end of step {step_count} have {total_success} successes"
                )
                step_count += 1
                total_count += count_per_step

            results.append(
                DirectMonteCarloResult(
                    successes=total_success,
                    monte_carlo_count=total_count,
                    likelihood=total_success / total_count,
                    model_name=model_name_pair[0],
                )
            )
        return results

    def execute(self) -> Sequence[DirectMonteCarloResult]:

        # set starting execution timestamp
        timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

        count_per_step = (
            self.config.monte_carlo_count_per_cycle * self.config.monte_carlo_cycles
        )
        seed_sequence = numpy.random.SeedSequence(self.config.monte_carlo_seed)

        # core count etc. logic here
        core_count = multiprocessing.cpu_count() - 1 or 1
        if (self.config.cap_core_count >= 1) and (
            self.config.cap_core_count < core_count
        ):
            core_count = self.config.cap_core_count
        _logger.info(f"Using {core_count} cores")

        results = []
        with multiprocessing.Pool(core_count) as pool:

            for model_name_pair in self.model_name_pairs:
                _logger.info(f"Working on model {model_name_pair[0]}")
                # This is probably where multiprocessing logic should go

                step_count = 0
                total_success = 0
                total_count = 0

                while (step_count < self.config.max_monte_carlo_cycles_steps) and (
                    total_success < self.config.target_success
                ):

                    step_count += 1

                    _logger.debug(f"Executing step {step_count}")

                    seeds = seed_sequence.spawn(self.config.monte_carlo_cycles)

                    pool_results = sum(
                        pool.imap_unordered(
                            self._wrapped_single_run,
                            [(model_name_pair, seed) for seed in seeds],
                            self.config.chunk_size,
                        )
                    )
                    _logger.debug(f"Pool results: {pool_results}")

                    total_success += pool_results
                    total_count += count_per_step
                    _logger.debug(
                        f"At end of step {step_count} have {total_success} successes"
                    )

                results.append(
                    DirectMonteCarloResult(
                        successes=total_success,
                        monte_carlo_count=total_count,
                        likelihood=total_success / total_count,
                        model_name=model_name_pair[0],
                    )
                )

        if self.config.write_bayesrun_file:

            filename = (
                f"{timestamp}-{self.config.tag}.realdata.fast_filter.bayesrun.csv"
            )
            _logger.info(f"Going to write to file [{filename}]")
            # row: Dict[str, Union[int, float, str]] = {}
            row = {}

            num_models = len(self.model_name_pairs)
            success_weight = sum(
                [
                    (res.successes / res.monte_carlo_count) / num_models
                    for res in results
                ]
            )

            for res in results:
                row.update(
                    {
                        f"{res.model_name}_success": res.successes,
                        f"{res.model_name}_count": res.monte_carlo_count,
                        f"{res.model_name}_prob": (
                            res.successes / res.monte_carlo_count
                        )
                        / (num_models * success_weight),
                    }
                )
            _logger.info(f"Writing row {row}")
            fieldnames = list(row.keys())

            with open(filename, "w", newline="") as outfile:
                writer = csv.DictWriter(outfile, fieldnames=fieldnames, dialect="unix")
                writer.writeheader()
                writer.writerow(row)

        return results
```
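Wiring the pieces above together looks roughly like the sketch below. The model is a toy stand-in because real runs use `pdme.model.DipoleModel` instances (and the filters from `dmc_filters.py`), which are not part of this diff; everything named `Toy*` here is invented for illustration.

```python
# Sketch only: a toy stand-in model and filter showing how DirectMonteCarloConfig,
# DirectMonteCarloRun and a DirectMonteCarloFilter fit together.
import numpy
from deepdog.direct_monte_carlo import DirectMonteCarloRun, DirectMonteCarloConfig
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter


class ToyModel:
    # mimics the only method DirectMonteCarloRun calls on a model:
    # get_monte_carlo_dipole_inputs(count, max_frequency, rng)
    def get_monte_carlo_dipole_inputs(self, count, max_frequency, rng):
        return rng.normal(size=(count, 1, 7))  # (configs, dipoles, [px py pz sx sy sz w])


class ToyFilter(DirectMonteCarloFilter):
    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        return samples[samples[:, 0, 0] > 2]  # keep only rare configurations


config = DirectMonteCarloConfig(
    monte_carlo_count_per_cycle=1000,
    monte_carlo_cycles=5,
    target_success=50,
    max_monte_carlo_cycles_steps=3,
    monte_carlo_seed=1234,
    tag="toy_example",
)
run = DirectMonteCarloRun([("toy_model", ToyModel())], ToyFilter(), config)

results = run.execute_no_multiprocessing()
for res in results:
    print(res.model_name, res.successes, res.monte_carlo_count, res.likelihood)
```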
211 deepdog/direct_monte_carlo/dmc_filters.py (new file)

```python
from numpy import ndarray
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
from typing import Sequence
import pdme.measurement
import pdme.measurement.input_types
import pdme.util.fast_nonlocal_spectrum
import pdme.util.fast_v_calc
import numpy


class SingleDotPotentialFilter(DirectMonteCarloFilter):
    def __init__(self, measurements: Sequence[pdme.measurement.DotRangeMeasurement]):
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]

        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
            self.dot_inputs
        )
        (
            self.lows,
            self.highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.measurements
        )

    def filter_samples(self, samples: ndarray) -> ndarray:
        current_sample = samples
        for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):

            if len(current_sample) < 1:
                break
            vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
                numpy.array([di]), current_sample
            )

            current_sample = current_sample[
                numpy.all((vals > low) & (vals < high), axis=1)
            ]
        return current_sample


class SingleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
    def __init__(self, measurements: Sequence[pdme.measurement.DotRangeMeasurement]):
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]

        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
            self.dot_inputs
        )
        (
            self.lows,
            self.highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.measurements
        )

    # oh no not this again
    def fast_s_spin_qubit_tarucha_apsd_dipoleses(
        self, dot_inputs: numpy.ndarray, dipoleses: numpy.ndarray
    ) -> numpy.ndarray:
        """
        No error correction here baby.
        """

        # We're going to annotate the indices on this class.
        # Let's define some indices:
        # A -> index of dipoleses configurations
        # j -> within a particular configuration, indexes dipole j
        # measurement_index -> if we have 100 frequencies for example, indexes which one of them it is
        # If we need to use numbers, let's use A -> 2, j -> 10, measurement_index -> 9 for consistency with
        # my other notes

        # axes are [dipole_config_idx A, dipole_idx j, {px, py, pz}3]
        ps = dipoleses[:, :, 0:3]
        # axes are [dipole_config_idx A, dipole_idx j, {sx, sy, sz}3]
        ss = dipoleses[:, :, 3:6]
        # axes are [dipole_config_idx A, dipole_idx j, w], last axis is just 1
        ws = dipoleses[:, :, 6]

        # dot_index is either 0 or 1 for dot1 or dot2
        # hopefully this adhoc grammar is making sense, with the explicit labelling of the values of the last axis in cartesian space
        # axes are [measurement_idx, {dot_index}, {rx, ry, rz}] where the inner {dot_index} is gone
        # [measurement_idx, cartesian3]
        rs = dot_inputs[:, 0:3]
        # axes are [measurement_idx]
        fs = dot_inputs[:, 3]

        # first operation!
        # r1s has shape [measurement_idx, rxs]
        # None inserts an extra axis so the r1s[:, None] has shape
        # [measurement_idx, 1]([rxs]) with the last rxs hidden
        #
        # ss has shape [ A, j, {sx, sy, sz}3], so second term has shape [A, 1, j]([sxs])
        # these broadcast from right to left
        # [ measurement_idx, 1, rxs]
        # [A, 1, j, sxs]
        # resulting in [A, measurement_idx, j, cart3] sxs rxs are both cart3
        diffses = rs[:, None] - ss[:, None, :]

        # norms takes out axis 3, the last one, giving [A, measurement_idx, j]
        norms = numpy.linalg.norm(diffses, axis=3)

        # _logger.info(f"norms1: {norms1}")
        # _logger.info(f"norms1 shape: {norms1.shape}")
        #
        # diffses1 (A, measurement_idx, j, xs)
        # ps: (A, j, px)
        # result is (A, measurement_idx, j)
        # intermediate_dot_prod = numpy.einsum("abcd,acd->abc", diffses1, ps)
        # _logger.info(f"dot product shape: {intermediate_dot_prod.shape}")

        # transpose makes it (j, measurement_idx, A)
        # transp_intermediate_dot_prod = numpy.transpose(numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**3))

        # transpose of diffses has shape (xs, j, measurement_idx, A)
        # numpy.transpose(diffses1)
        # _logger.info(f"dot product shape: {transp_intermediate_dot_prod.shape}")

        # inner transpose is (j, measurement_idx, A) * (xs, j, measurement_idx, A)
        # next transpose puts it back to (A, measurement_idx, j, xs)
        # p_dot_r_times_r_term = 3 * numpy.transpose(numpy.transpose(numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**3)) * numpy.transpose(diffses1))
        # _logger.info(f"p_dot_r_times_r_term: {p_dot_r_times_r_term.shape}")

        # only x axis puts us at (A, measurement_idx, j)
        # p_dot_r_times_r_term_x_only = p_dot_r_times_r_term[:, :, :, 0]
        # _logger.info(f"p_dot_r_times_r_term_x_only.shape: {p_dot_r_times_r_term_x_only.shape}")

        # now to complete the numerator we subtract the ps, which are (A, j, px):
        # slicing off the end gives us (A, j), so we newaxis to get (A, 1, j)
        # _logger.info(ps[:, numpy.newaxis, :, 0].shape)
        alphses = (
            (
                3
                * numpy.transpose(
                    numpy.transpose(
                        numpy.einsum("abcd,acd->abc", diffses, ps) / (norms**2)
                    )
                    * numpy.transpose(diffses)
                )[:, :, :, 0]
            )
            - ps[:, numpy.newaxis, :, 0]
        ) / (norms**3)

        bses = (
            2
            * numpy.pi
            * ws[:, None, :]
            / ((2 * numpy.pi * fs[:, None]) ** 2 + 4 * ws[:, None, :] ** 2)
        )

        return numpy.einsum("...j->...", alphses * alphses * bses)

    def filter_samples(self, samples: ndarray) -> ndarray:
        current_sample = samples
        for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):

            if len(current_sample) < 1:
                break
            vals = self.fast_s_spin_qubit_tarucha_apsd_dipoleses(
                numpy.array([di]), current_sample
            )
            # _logger.info(vals)

            current_sample = current_sample[
                numpy.all((vals > low) & (vals < high), axis=1)
            ]
        # _logger.info(f"leaving with {len(current_sample)}")
        return current_sample


class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
    def __init__(
        self,
        pair_phase_measurements: Sequence[pdme.measurement.DotPairRangeMeasurement],
    ):
        self.pair_phase_measurements = pair_phase_measurements
        self.dot_pair_inputs = [
            (measure.r1, measure.r2, measure.f)
            for measure in self.pair_phase_measurements
        ]
        self.dot_pair_inputs_array = (
            pdme.measurement.input_types.dot_pair_inputs_to_array(self.dot_pair_inputs)
        )
        (
            self.pair_phase_lows,
            self.pair_phase_highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.pair_phase_measurements
        )

    def filter_samples(self, samples: ndarray) -> ndarray:
        current_sample = samples

        for pi, plow, phigh in zip(
            self.dot_pair_inputs_array, self.pair_phase_lows, self.pair_phase_highs
        ):
            if len(current_sample) < 1:
                break

            vals = pdme.util.fast_nonlocal_spectrum.signarg(
                pdme.util.fast_nonlocal_spectrum.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
                    numpy.array([pi]), current_sample
                )
            )
            current_sample = current_sample[
                numpy.all(
                    ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
                    axis=1,
                )
            ]
        return current_sample
```
58 deepdog/indexify/__init__.py (new file)

```python
"""
Probably should just include a way to handle the indexify function I reuse so much.

All about breaking an integer into a tuple of values from lists, which is useful because of how we do CHTC runs.
"""
import itertools
import typing
import logging
import math

_logger = logging.getLogger(__name__)


# from https://stackoverflow.com/questions/5228158/cartesian-product-of-a-dictionary-of-lists
def _dict_product(dicts):
    """
    >>> list(dict_product(dict(number=[1,2], character='ab')))
    [{'character': 'a', 'number': 1},
    {'character': 'a', 'number': 2},
    {'character': 'b', 'number': 1},
    {'character': 'b', 'number': 2}]
    """
    return list(dict(zip(dicts.keys(), x)) for x in itertools.product(*dicts.values()))


class Indexifier:
    """
    The order of keys is very important, but collections.OrderedDict is no longer needed in python 3.7.
    I think it's okay to rely on that.
    """

    def __init__(self, list_dict: typing.Dict[str, typing.Sequence]):
        self.dict = list_dict

    def indexify(self, n: int) -> typing.Dict[str, typing.Any]:
        product_dict = _dict_product(self.dict)
        return product_dict[n]

    def _indexify_indices(self, n: int) -> typing.Sequence[int]:
        """
        legacy indexify from old scripts, copypast.
        could be used like
        >>> ret = {}
        >>> for k, i in zip(self.dict.keys(), self._indexify_indices):
        >>>     ret[k] = self.dict[k][i]
        >>> return ret
        """
        weights = [len(v) for v in self.dict.values()]
        N = math.prod(weights)
        curr_n = n
        curr_N = N
        out = []
        for w in weights[:-1]:
            # print(f"current: {curr_N}, {curr_n}, {curr_n // w}")
            curr_N = curr_N // w  # should be int division anyway
            out.append(curr_n // curr_N)
            curr_n = curr_n % curr_N
        return out
```
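`Indexifier.indexify` maps a single integer job index onto one combination from the Cartesian product of the value lists, walking the product in key order. A quick sketch using the same toy data as the `_dict_product` docstring:

```python
# Quick sketch of Indexifier behaviour, reusing the toy data from the _dict_product docstring.
from deepdog.indexify import Indexifier

indexifier = Indexifier({"number": [1, 2], "character": "ab"})

# The product is ordered by the dict's key order, so indexes 0..3 walk through
# (number, character) = (1, 'a'), (1, 'b'), (2, 'a'), (2, 'b').
print(indexifier.indexify(0))  # {'number': 1, 'character': 'a'}
print(indexifier.indexify(3))  # {'number': 2, 'character': 'b'}
```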
(file header not captured)

```diff
@@ -66,6 +66,86 @@ def get_a_result_fast_filter_pairs(input) -> int:
     return len(current_sample)
 
 
+def get_a_result_fast_filter_potential_pair_phase_only(input) -> int:
+    (
+        model,
+        pair_inputs,
+        pair_phase_lows,
+        pair_phase_highs,
+        monte_carlo_count,
+        seed,
+    ) = input
+
+    rng = numpy.random.default_rng(seed)
+    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
+    sample_dipoles = model.get_monte_carlo_dipole_inputs(
+        monte_carlo_count, None, rng_to_use=rng
+    )
+
+    current_sample = sample_dipoles
+
+    for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
+        if len(current_sample) < 1:
+            break
+        vals = pdme.util.fast_nonlocal_spectrum.signarg(
+            pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
+                numpy.array([pi]), current_sample
+            )
+        )
+
+        current_sample = current_sample[
+            numpy.all(
+                ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
+                axis=1,
+            )
+        ]
+    return len(current_sample)
+
+
+def get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only(input) -> int:
+    (
+        model,
+        pair_inputs,
+        pair_phase_lows,
+        pair_phase_highs,
+        monte_carlo_count,
+        seed,
+    ) = input
+
+    rng = numpy.random.default_rng(seed)
+    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
+    sample_dipoles = model.get_monte_carlo_dipole_inputs(
+        monte_carlo_count, None, rng_to_use=rng
+    )
+
+    current_sample = sample_dipoles
+
+    for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
+        if len(current_sample) < 1:
+            break
+
+        ###
+        # This should be abstracted out, but we're going to dump it here for time pressure's sake
+        ###
+        # vals = pdme.util.fast_nonlocal_spectrum.signarg(
+        #     pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
+        #         numpy.array([pi]), current_sample
+        #     )
+        # )
+        #
+        vals = pdme.util.fast_nonlocal_spectrum.signarg(
+            pdme.util.fast_nonlocal_spectrum.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
+                numpy.array([pi]), current_sample
+            )
+        )
+        current_sample = current_sample[
+            numpy.all(
+                ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
+                axis=1,
+            )
+        ]
+    return len(current_sample)
+
+
 def get_a_result_fast_filter(input) -> int:
     model, dot_inputs, lows, highs, monte_carlo_count, seed = input
 
@@ -108,6 +188,11 @@ class RealSpectrumRun:
 
     run_count: int
         The number of runs to do.
+
+    If pair_measurements is not None, uses pair measurement method (and single measurements too).
+    If pair_phase_measurements is not None, ignores measurements and uses phase measurements _only_
+    This is lazy design on my part.
+
    """
 
    def __init__(
@@ -125,6 +210,9 @@ class RealSpectrumRun:
        pair_measurements: Optional[
            Sequence[pdme.measurement.DotPairRangeMeasurement]
        ] = None,
+       pair_phase_measurements: Optional[
+           Sequence[pdme.measurement.DotPairRangeMeasurement]
+       ] = None,
    ) -> None:
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
@@ -136,6 +224,8 @@ class RealSpectrumRun:
        if pair_measurements is not None:
            self.pair_measurements = pair_measurements
            self.use_pair_measurements = True
+           self.use_pair_phase_measurements = False
+
            self.dot_pair_inputs = [
                (measure.r1, measure.r2, measure.f)
                for measure in self.pair_measurements
@@ -145,8 +235,22 @@ class RealSpectrumRun:
                    self.dot_pair_inputs
                )
            )
+       elif pair_phase_measurements is not None:
+           self.use_pair_measurements = False
+           self.use_pair_phase_measurements = True
+           self.pair_phase_measurements = pair_phase_measurements
+           self.dot_pair_inputs = [
+               (measure.r1, measure.r2, measure.f)
+               for measure in self.pair_phase_measurements
+           ]
+           self.dot_pair_inputs_array = (
+               pdme.measurement.input_types.dot_pair_inputs_to_array(
+                   self.dot_pair_inputs
+               )
+           )
        else:
            self.use_pair_measurements = False
+           self.use_pair_phase_measurements = False
 
        self.models = [model for (_, model) in models_with_names]
        self.model_names = [name for (name, _) in models_with_names]
@@ -198,6 +302,16 @@ class RealSpectrumRun:
                self.pair_measurements
            )
 
+       pair_phase_lows = None
+       pair_phase_highs = None
+       if self.use_pair_phase_measurements:
+           (
+               pair_phase_lows,
+               pair_phase_highs,
+           ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
+               self.pair_phase_measurements
+           )
+
        # define a new seed sequence for each run
        seed_sequence = numpy.random.SeedSequence(self.initial_seed)
 
@@ -229,6 +343,7 @@ class RealSpectrumRun:
                seeds = seed_sequence.spawn(self.monte_carlo_cycles)
 
                if self.use_pair_measurements:
+                   _logger.debug("using pair measurements")
                    current_success = sum(
                        pool.imap_unordered(
                            get_a_result_fast_filter_pairs,
@@ -249,6 +364,26 @@ class RealSpectrumRun:
                            self.chunksize,
                        )
                    )
+               elif self.use_pair_phase_measurements:
+                   _logger.debug("using pair phase measurements")
+                   _logger.debug("specifically using tarucha")
+                   current_success = sum(
+                       pool.imap_unordered(
+                           get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only,
+                           [
+                               (
+                                   model,
+                                   self.dot_pair_inputs_array,
+                                   pair_phase_lows,
+                                   pair_phase_highs,
+                                   self.monte_carlo_count,
+                                   seed,
+                               )
+                               for seed in seeds
+                           ],
+                           self.chunksize,
+                       )
+                   )
                else:
 
                    current_success = sum(
```
170 deepdog/results/__init__.py (new file)
@@ -0,0 +1,170 @@
import dataclasses
import re
import typing
import logging
import deepdog.indexify
import pathlib
import csv

_logger = logging.getLogger(__name__)

FILENAME_REGEX = r"(?P<timestamp>\d{8}-\d{6})-(?P<filename_slug>.*)\.realdata\.fast_filter\.bayesrun\.csv"

MODEL_REGEXES = [
	r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)-orientation_(?P<orientation>free|fixedxy|fixedz)-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w*)"
]

FILE_SLUG_REGEXES = [
	r"mock_tarucha-(?P<job_index>\d+)",
	r"(?:(?P<mock>mock)_)?tarucha(?:_(?P<tarucha_run_id>\d+))?-(?P<job_index>\d+)",
]


@dataclasses.dataclass
class BayesrunOutputFilename:
	timestamp: str
	filename_slug: str
	path: pathlib.Path


@dataclasses.dataclass
class BayesrunColumnParsed:
	"""
	class for parsing a bayesrun while pulling certain special fields out
	"""

	def __init__(self, groupdict: typing.Dict[str, str]):
		self.column_field = groupdict["field_name"]
		self.model_field_dict = {
			k: v for k, v in groupdict.items() if k != "field_name"
		}

	def __str__(self):
		return f"BayesrunColumnParsed[{self.column_field}: {self.model_field_dict}]"


@dataclasses.dataclass
class BayesrunModelResult:
	parsed_model_keys: typing.Dict[str, str]
	success: int
	count: int


@dataclasses.dataclass
class BayesrunOutput:
	filename: BayesrunOutputFilename
	data: typing.Dict["str", typing.Any]
	results: typing.Sequence[BayesrunModelResult]


def _batch_iterable_into_chunks(iterable, n=1):
	"""
	utility for batching bayesrun files where columns appear in threes
	"""
	for ndx in range(0, len(iterable), n):
		yield iterable[ndx : min(ndx + n, len(iterable))]


def _parse_bayesrun_column(
	column: str,
) -> typing.Optional[BayesrunColumnParsed]:
	"""
	Tries one by one all of a predefined list of regexes that I might have used in the past.
	Returns the groupdict for the first match, or None if no match found.
	"""
	for pattern in MODEL_REGEXES:
		match = re.match(pattern, column)
		if match:
			return BayesrunColumnParsed(match.groupdict())
	else:
		return None


def _parse_bayesrun_row(
	row: typing.Dict[str, str],
) -> typing.Sequence[BayesrunModelResult]:

	results = []
	batched_keys = _batch_iterable_into_chunks(list(row.keys()), 3)
	for model_keys in batched_keys:
		parsed = [_parse_bayesrun_column(column) for column in model_keys]
		values = [row[column] for column in model_keys]
		if parsed[0] is None:
			raise ValueError(f"no viable success row found for keys {model_keys}")
		if parsed[1] is None:
			raise ValueError(f"no viable count row found for keys {model_keys}")
		if parsed[0].column_field != "success":
			raise ValueError(f"The column {model_keys[0]} is not a success field")
		if parsed[1].column_field != "count":
			raise ValueError(f"The column {model_keys[1]} is not a count field")
		parsed_keys = parsed[0].model_field_dict
		success = int(values[0])
		count = int(values[1])
		results.append(
			BayesrunModelResult(
				parsed_model_keys=parsed_keys,
				success=success,
				count=count,
			)
		)
	return results


def _parse_output_filename(file: pathlib.Path) -> BayesrunOutputFilename:
	filename = file.name
	match = re.match(FILENAME_REGEX, filename)
	if not match:
		raise ValueError(f"{filename} was not a valid bayesrun output")
	groups = match.groupdict()
	return BayesrunOutputFilename(
		timestamp=groups["timestamp"], filename_slug=groups["filename_slug"], path=file
	)


def _parse_file_slug(slug: str) -> typing.Optional[typing.Dict[str, str]]:
	for pattern in FILE_SLUG_REGEXES:
		match = re.match(pattern, slug)
		if match:
			return match.groupdict()
	else:
		return None


def read_output_file(
	file: pathlib.Path, indexifier: typing.Optional[deepdog.indexify.Indexifier]
) -> BayesrunOutput:

	parsed_filename = tag = _parse_output_filename(file)
	out = BayesrunOutput(filename=parsed_filename, data={}, results=[])

	out.data.update(dataclasses.asdict(tag))
	parsed_tag = _parse_file_slug(parsed_filename.filename_slug)
	if parsed_tag is None:
		_logger.warning(
			f"Could not parse {tag} against any matching regexes. Going to skip tag parsing"
		)
	else:
		out.data.update(parsed_tag)
		if indexifier is not None:
			try:
				job_index = parsed_tag["job_index"]
				indexified = indexifier.indexify(int(job_index))
				out.data.update(indexified)
			except KeyError:
				# This isn't really that important of an error, apart from the warning
				_logger.warning(
					f"Parsed tag to {parsed_tag}, and attempted to indexify but no job_index key was found. skipping and moving on"
				)

	with file.open() as input_file:
		reader = csv.DictReader(input_file)
		rows = [r for r in reader]
		if len(rows) == 1:
			row = rows[0]
		else:
			raise ValueError(f"Confused about having multiple rows in {file.name}")
		results = _parse_bayesrun_row(row)

	out.results = results

	return out
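To see how the pieces of this module fit together, here is a small sketch of the filename and slug parsing on a made-up output name (the timestamp, run id, and job index below are invented for illustration):

import re
import deepdog.results

# hypothetical filename in the shape FILENAME_REGEX expects
example = "20240501-120000-tarucha_5-17.realdata.fast_filter.bayesrun.csv"

match = re.match(deepdog.results.FILENAME_REGEX, example)
assert match is not None
groups = match.groupdict()
# groups["timestamp"] == "20240501-120000"
# groups["filename_slug"] == "tarucha_5-17"

# the second FILE_SLUG_REGEXES pattern then pulls out the run and job indices
slug = deepdog.results._parse_file_slug(groups["filename_slug"])
# slug["tarucha_run_id"] == "5", slug["job_index"] == "17"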
@@ -17,6 +17,7 @@ class SubsetSimulationResult:
	over_target_likelihood: Optional[float]
	under_target_cost: Optional[float]
	under_target_likelihood: Optional[float]
	lowest_likelihood: Optional[float]


class SubsetSimulation:
@@ -37,6 +38,9 @@ class SubsetSimulation:
		default_r_step=0.01,
		default_w_log_step=0.01,
		default_upper_w_log_step=4,
		keep_probs_list=True,
		dump_last_generation_to_file=False,
		initial_cost_chunk_size=100,
	):
		name, model = model_name_pair
		self.model_name = name
@@ -79,6 +83,11 @@ class SubsetSimulation:
		self.target_cost = target_cost
		_logger.info(f"will stop at target cost {target_cost}")

		self.keep_probs_list = keep_probs_list
		self.dump_last_generations = dump_last_generation_to_file

		self.initial_cost_chunk_size = initial_cost_chunk_size

	def execute(self) -> SubsetSimulationResult:

		probs_list = []
@@ -90,7 +99,20 @@ class SubsetSimulation:
		)
		# _logger.debug(sample_dipoles)
		# _logger.debug(sample_dipoles.shape)
		costs = self.cost_function_to_use(sample_dipoles)

		raw_costs = []
		_logger.debug(
			f"Using iterated cost function thing with chunk size {self.initial_cost_chunk_size}"
		)

		for x in range(0, len(sample_dipoles), self.initial_cost_chunk_size):
			_logger.debug(f"doing chunk {x}")
			raw_costs.extend(
				self.cost_function_to_use(
					sample_dipoles[x : x + self.initial_cost_chunk_size]
				)
			)
		costs = numpy.array(raw_costs)

		_logger.debug(f"costs: {costs}")
		sorted_indexes = costs.argsort()[::-1]
@@ -114,27 +136,73 @@ class SubsetSimulation:
		mcmc_rng = numpy.random.default_rng(self.mcmc_seed)

		for i in range(self.m_max):
			next_seeds = all_chains[-self.n_c:]
			next_seeds = all_chains[-self.n_c :]

			for cost_index, cost_chain in enumerate(all_chains[: -self.n_c]):
				probs_list.append(
					(
						((self.n_c * self.n_s - cost_index) / (self.n_c * self.n_s))
						/ (self.n_s ** (i)),
						cost_chain[0],
						i + 1,
			if self.dump_last_generations:
				_logger.info("writing out csv file")
				next_dipoles_seed_dipoles = numpy.array([n[1] for n in next_seeds])
				for n in range(self.model.n):
					_logger.info(f"{next_dipoles_seed_dipoles[:, n].shape}")
					numpy.savetxt(
						f"generation_{self.n_c}_{self.n_s}_{i}_dipole_{n}.csv",
						next_dipoles_seed_dipoles[:, n],
						delimiter=",",
					)

				next_seeds_as_array = numpy.array([s for _, s in next_seeds])
				stdevs = self.get_stdevs_from_arrays(next_seeds_as_array)
				_logger.info(f"got stdevs: {stdevs.stdevs}")
				all_long_chains = []
				for seed_index, (c, s) in enumerate(
					next_seeds[:: len(next_seeds) // 20]
				):
					# chain = mcmc(s, threshold_cost, n_s, model, dot_inputs_array, actual_measurement_array, mcmc_rng, curr_cost=c, stdevs=stdevs)
					# until new version gotta do
					_logger.debug(f"\t{seed_index}: doing long chain on the next seed")

					long_chain = self.model.get_mcmc_chain(
						s,
						self.cost_function_to_use,
						1000,
						threshold_cost,
						stdevs,
						initial_cost=c,
						rng_arg=mcmc_rng,
					)
					for _, chained in long_chain:
						all_long_chains.append(chained)
				all_long_chains_array = numpy.array(all_long_chains)
				for n in range(self.model.n):
					_logger.info(f"{all_long_chains_array[:, n].shape}")
					numpy.savetxt(
						f"long_chain_generation_{self.n_c}_{self.n_s}_{i}_dipole_{n}.csv",
						all_long_chains_array[:, n],
						delimiter=",",
					)

			if self.keep_probs_list:
				for cost_index, cost_chain in enumerate(all_chains[: -self.n_c]):
					probs_list.append(
						(
							((self.n_c * self.n_s - cost_index) / (self.n_c * self.n_s))
							/ (self.n_s ** (i)),
							cost_chain[0],
							i + 1,
						)
					)
					)

			next_seeds_as_array = numpy.array([s for _, s in next_seeds])

			stdevs = self.get_stdevs_from_arrays(next_seeds_as_array)
			_logger.info(f"got stdevs: {stdevs.stdevs}")

			_logger.debug("Starting the MCMC")
			all_chains = []
			for c, s in next_seeds:
			for seed_index, (c, s) in enumerate(next_seeds):
				# chain = mcmc(s, threshold_cost, n_s, model, dot_inputs_array, actual_measurement_array, mcmc_rng, curr_cost=c, stdevs=stdevs)
				# until new version gotta do
				_logger.debug(
					f"\t{seed_index}: getting another chain from the next seed"
				)
				chain = self.model.get_mcmc_chain(
					s,
					self.cost_function_to_use,
@@ -147,13 +215,14 @@ class SubsetSimulation:
				for cost, chained in chain:
					try:
						filtered_cost = cost[0]
					except IndexError:
					except (IndexError, TypeError):
						filtered_cost = cost
					all_chains.append((filtered_cost, chained))

			_logger.debug("finished mcmc")
			# _logger.debug(all_chains)

			all_chains.sort(key=lambda c: c[0], reverse=True)
			_logger.debug("finished sorting all_chains")

			threshold_cost = all_chains[-self.n_c][0]
			_logger.info(
@@ -169,14 +238,18 @@ class SubsetSimulation:

				shorter_probs_list = []
				for cost_index, cost_chain in enumerate(all_chains):
					probs_list.append(
						(
							((self.n_c * self.n_s - cost_index) / (self.n_c * self.n_s))
							/ (self.n_s ** (i)),
							cost_chain[0],
							i + 1,
					if self.keep_probs_list:
						probs_list.append(
							(
								(
									(self.n_c * self.n_s - cost_index)
									/ (self.n_c * self.n_s)
								)
								/ (self.n_s ** (i)),
								cost_chain[0],
								i + 1,
							)
						)
						)
					shorter_probs_list.append(
						(
							cost_chain[0],
@@ -191,21 +264,23 @@ class SubsetSimulation:
					over_target_likelihood=shorter_probs_list[over_index - 1][1],
					under_target_cost=shorter_probs_list[over_index][0],
					under_target_likelihood=shorter_probs_list[over_index][1],
					lowest_likelihood=shorter_probs_list[-1][1],
				)
				return result

			# _logger.debug([c[0] for c in all_chains[-n_c:]])
			_logger.info(f"doing level {i + 1}")

		for cost_index, cost_chain in enumerate(all_chains):
			probs_list.append(
				(
					((self.n_c * self.n_s - cost_index) / (self.n_c * self.n_s))
					/ (self.n_s ** (self.m_max)),
					cost_chain[0],
					self.m_max + 1,
		if self.keep_probs_list:
			for cost_index, cost_chain in enumerate(all_chains):
				probs_list.append(
					(
						((self.n_c * self.n_s - cost_index) / (self.n_c * self.n_s))
						/ (self.n_s ** (self.m_max)),
						cost_chain[0],
						self.m_max + 1,
					)
				)
				)
		threshold_cost = all_chains[-self.n_c][0]
		_logger.info(
			f"final threshold cost: {threshold_cost}, at P = (1 / {self.n_s})^{self.m_max + 1}"
@@ -215,12 +290,16 @@ class SubsetSimulation:
		# for prob, prob_cost in probs_list:
		# 	_logger.info(f"\t{prob}: {prob_cost}")
		probs_list.sort(key=lambda c: c[0], reverse=True)

		min_likelihood = ((1) / (self.n_c * self.n_s)) / (self.n_s ** (self.m_max))

		result = SubsetSimulationResult(
			probs_list=probs_list,
			over_target_cost=None,
			over_target_likelihood=None,
			under_target_cost=None,
			under_target_likelihood=None,
			lowest_likelihood=min_likelihood,
		)
		return result

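The new initial_cost_chunk_size option evaluates the level-0 costs in slices instead of one large vectorized call, which bounds peak memory for big sample batches. A minimal sketch of the chunking pattern with a stand-in cost function (the names here are illustrative, not the class's real attributes):

import numpy

def cost_function(samples: numpy.ndarray) -> numpy.ndarray:
	# stand-in for self.cost_function_to_use
	return numpy.linalg.norm(samples, axis=-1)

sample_dipoles = numpy.random.default_rng(0).normal(size=(1000, 7))
chunk_size = 100

raw_costs = []
for x in range(0, len(sample_dipoles), chunk_size):
	# each slice is evaluated separately, mirroring the loop in execute()
	raw_costs.extend(cost_function(sample_dipoles[x : x + chunk_size]))
costs = numpy.array(raw_costs)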
38 do.sh (deleted)
@@ -1,38 +0,0 @@
#!/usr/bin/env bash
# Do - The Simplest Build Tool on Earth.
# Documentation and examples see https://github.com/8gears/do

set -Eeuo pipefail # -e "Automatic exit from bash shell script on error" -u "Treat unset variables and parameters as errors"

build() {
	echo "I am ${FUNCNAME[0]}ing"
	poetry build
}

test() {
	echo "I am ${FUNCNAME[0]}ing"
	poetry run flake8 deepdog tests
	poetry run mypy deepdog
	poetry run pytest
}

fmt() {
	poetry run black .
	find . -type f -name "*.py" -exec sed -i -e 's/ /\t/g' {} \;
}

release() {
	./scripts/release.sh
}

htmlcov() {
	poetry run pytest --cov-report=html
}

all() {
	build && test
}

"$@" # <- execute the task

[ "$#" -gt 0 ] || printf "Usage:\n\t./do.sh %s\n" "($(compgen -A function | grep '^[^_]' | paste -sd '|' -))"
145 flake.lock (generated)
@@ -1,28 +1,33 @@
|
||||
{
|
||||
"nodes": {
|
||||
"flake-utils": {
|
||||
"inputs": {
|
||||
"systems": "systems"
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1648297722,
|
||||
"narHash": "sha256-W+qlPsiZd8F3XkzXOzAoR+mpFqzm3ekQkJNa+PIh1BQ=",
|
||||
"lastModified": 1710146030,
|
||||
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
|
||||
"owner": "numtide",
|
||||
"repo": "flake-utils",
|
||||
"rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
|
||||
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "numtide",
|
||||
"repo": "flake-utils",
|
||||
"rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"flake-utils_2": {
|
||||
"inputs": {
|
||||
"systems": "systems_2"
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1653893745,
|
||||
"narHash": "sha256-0jntwV3Z8//YwuOjzhV2sgJJPt+HY6KhU7VZUL0fKZQ=",
|
||||
"lastModified": 1705309234,
|
||||
"narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
|
||||
"owner": "numtide",
|
||||
"repo": "flake-utils",
|
||||
"rev": "1ed9fb1935d260de5fe1c2f7ee0ebaae17ed2fa1",
|
||||
"rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
@@ -31,29 +36,34 @@
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nix-github-actions": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"poetry2nixSrc",
|
||||
"nixpkgs"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1703863825,
|
||||
"narHash": "sha256-rXwqjtwiGKJheXB43ybM8NwWB8rO2dSRrEqes0S7F5Y=",
|
||||
"owner": "nix-community",
|
||||
"repo": "nix-github-actions",
|
||||
"rev": "5163432afc817cf8bd1f031418d1869e4c9d5547",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "nix-community",
|
||||
"repo": "nix-github-actions",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nixpkgs": {
|
||||
"locked": {
|
||||
"lastModified": 1655087213,
|
||||
"narHash": "sha256-4R5oQ+OwGAAcXWYrxC4gFMTUSstGxaN8kN7e8hkum/8=",
|
||||
"lastModified": 1710703777,
|
||||
"narHash": "sha256-M4CNAgjrtvrxIWIAc98RTYcVFoAgwUhrYekeiMScj18=",
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nixpkgs_2": {
|
||||
"locked": {
|
||||
"lastModified": 1655046959,
|
||||
"narHash": "sha256-gxqHZKq1ReLDe6ZMJSbmSZlLY95DsVq5o6jQihhzvmw=",
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "07bf3d25ce1da3bee6703657e6a787a4c6cdcea9",
|
||||
"rev": "fc7885fbcea4b782142e06ce2d4d08cf92862004",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
@@ -62,23 +72,27 @@
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"poetry2nix": {
|
||||
"poetry2nixSrc": {
|
||||
"inputs": {
|
||||
"flake-utils": "flake-utils_2",
|
||||
"nixpkgs": "nixpkgs_2"
|
||||
"nix-github-actions": "nix-github-actions",
|
||||
"nixpkgs": [
|
||||
"nixpkgs"
|
||||
],
|
||||
"systems": "systems_3",
|
||||
"treefmt-nix": "treefmt-nix"
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1654921554,
|
||||
"narHash": "sha256-hkfMdQAHSwLWlg0sBVvgrQdIiBP45U1/ktmFpY4g2Mo=",
|
||||
"lastModified": 1708589824,
|
||||
"narHash": "sha256-2GOiFTkvs5MtVF65sC78KNVxQSmsxtk0WmV1wJ9V2ck=",
|
||||
"owner": "nix-community",
|
||||
"repo": "poetry2nix",
|
||||
"rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
|
||||
"rev": "3c92540611f42d3fb2d0d084a6c694cd6544b609",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "nix-community",
|
||||
"repo": "poetry2nix",
|
||||
"rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
@@ -86,7 +100,72 @@
|
||||
"inputs": {
|
||||
"flake-utils": "flake-utils",
|
||||
"nixpkgs": "nixpkgs",
|
||||
"poetry2nix": "poetry2nix"
|
||||
"poetry2nixSrc": "poetry2nixSrc"
|
||||
}
|
||||
},
|
||||
"systems": {
|
||||
"locked": {
|
||||
"lastModified": 1681028828,
|
||||
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
|
||||
"owner": "nix-systems",
|
||||
"repo": "default",
|
||||
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "nix-systems",
|
||||
"repo": "default",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"systems_2": {
|
||||
"locked": {
|
||||
"lastModified": 1681028828,
|
||||
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
|
||||
"owner": "nix-systems",
|
||||
"repo": "default",
|
||||
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "nix-systems",
|
||||
"repo": "default",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"systems_3": {
|
||||
"locked": {
|
||||
"lastModified": 1681028828,
|
||||
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
|
||||
"owner": "nix-systems",
|
||||
"repo": "default",
|
||||
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"id": "systems",
|
||||
"type": "indirect"
|
||||
}
|
||||
},
|
||||
"treefmt-nix": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"poetry2nixSrc",
|
||||
"nixpkgs"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1708335038,
|
||||
"narHash": "sha256-ETLZNFBVCabo7lJrpjD6cAbnE11eDOjaQnznmg/6hAE=",
|
||||
"owner": "numtide",
|
||||
"repo": "treefmt-nix",
|
||||
"rev": "e504621290a1fd896631ddbc5e9c16f4366c9f65",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "numtide",
|
||||
"repo": "treefmt-nix",
|
||||
"type": "github"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
97 flake.nix
@@ -1,63 +1,46 @@
{
  description = "Application packaged using poetry2nix";

  inputs.flake-utils.url = "github:numtide/flake-utils?rev=0f8662f1319ad6abf89b3380dd2722369fc51ade";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs?rev=37b6b161e536fddca54424cf80662bce735bdd1e";
  inputs.poetry2nix.url = "github:nix-community/poetry2nix?rev=7b71679fa7df00e1678fc3f1d1d4f5f372341b63";
  inputs.flake-utils.url = "github:numtide/flake-utils";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  inputs.poetry2nixSrc = {
    url = "github:nix-community/poetry2nix";
    inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { self, nixpkgs, flake-utils, poetry2nix }:
    {
      # Nixpkgs overlay providing the application
      overlay = nixpkgs.lib.composeManyExtensions [
        poetry2nix.overlay
        (final: prev: {
          # The application
          deepdog = prev.poetry2nix.mkPoetryApplication {
            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
              # …
              # workaround https://github.com/nix-community/poetry2nix/issues/568
              pdme = super.pdme.overridePythonAttrs (old: {
                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
              });
            });
            projectDir = ./.;
          };
          deepdogEnv = prev.poetry2nix.mkPoetryEnv {
            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
              # …
              # workaround https://github.com/nix-community/poetry2nix/issues/568
              pdme = super.pdme.overridePythonAttrs (old: {
                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
              });
            });
            projectDir = ./.;
          };
        })
      ];
    } // (flake-utils.lib.eachDefaultSystem (system:
  outputs = { self, nixpkgs, flake-utils, poetry2nixSrc }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ self.overlay ];
        };
      in
      {
        apps = {
          deepdog = pkgs.deepdog;
        };

        defaultApp = pkgs.deepdog;
        devShell = pkgs.mkShell {
          buildInputs = [
            pkgs.poetry
            pkgs.deepdogEnv
            pkgs.deepdog
          ];
          shellHook = ''
            export DO_NIX_CUSTOM=1
          '';
          packages = [ pkgs.nodejs-16_x ];
        };

      }));
        pkgs = nixpkgs.legacyPackages.${system};
        poetry2nix = poetry2nixSrc.lib.mkPoetry2Nix { inherit pkgs; };
      in {
        packages = {
          deepdogApp = poetry2nix.mkPoetryApplication {
            projectDir = self;
            python = pkgs.python39;
            preferWheels = true;
          };
          deepdogEnv = poetry2nix.mkPoetryEnv {
            projectDir = self;
            python = pkgs.python39;
            preferWheels = true;
            overrides = poetry2nix.overrides.withDefaults (self: super: {
            });
          };
          default = self.packages.${system}.deepdogEnv;
        };
        devShells.default = pkgs.mkShell {
          inputsFrom = [ self.packages.${system}.deepdogEnv ];
          buildInputs = [
            pkgs.poetry
            self.packages.${system}.deepdogEnv
            self.packages.${system}.deepdogApp
            pkgs.just
          ];
          shellHook = ''
            export DO_NIX_CUSTOM=1
          '';
        };
      }
    );
}
60 justfile (new file)
@@ -0,0 +1,60 @@

# execute default build
default: build

# builds the python module using poetry
build:
	echo "building..."
	poetry build

# print a message displaying whether nix is being used
checknix:
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		echo "In an interactive nix env."
	else
		echo "Using poetry as runner, no nix detected."
	fi

# run all tests
test: fmt
	#!/usr/bin/env bash
	set -euxo pipefail

	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		echo "testing, using nix..."
		flake8 deepdog tests
		mypy deepdog
		pytest
	else
		echo "testing..."
		poetry run flake8 deepdog tests
		poetry run mypy deepdog
		poetry run pytest
	fi

# format code
fmt:
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		black .
	else
		poetry run black .
	fi
	find deepdog -type f -name "*.py" -exec sed -i -e 's/ /\t/g' {} \;
	find tests -type f -name "*.py" -exec sed -i -e 's/ /\t/g' {} \;

# release the app, checking that our working tree is clean and ready for release, optionally takes target version
release version="":
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ -n "{{version}}" ]]; then
		./scripts/release.sh {{version}}
	else
		./scripts/release.sh
	fi

htmlcov:
	poetry run pytest --cov-report=html
1000 poetry.lock (generated)
File diff suppressed because it is too large
@@ -1,22 +1,27 @@
[tool.poetry]
name = "deepdog"
version = "0.7.1"
version = "1.0.0"
description = ""
authors = ["Deepak Mallubhotla <dmallubhotla+github@gmail.com>"]

[tool.poetry.dependencies]
python = ">=3.8.1,<3.10"
pdme = "^0.9.1"
pdme = "^1.0.0"
numpy = "1.22.3"
scipy = "1.10"
tqdm = "^4.66.2"

[tool.poetry.dev-dependencies]
pytest = ">=6"
flake8 = "^4.0.1"
pytest-cov = "^3.0.0"
pytest-cov = "^4.1.0"
mypy = "^0.971"
python-semantic-release = "^7.24.0"
black = "^22.3.0"
syrupy = "^4.0.8"

[tool.poetry.scripts]
probs = "deepdog.cli.probs:wrapped_main"

[build-system]
requires = ["poetry-core>=1.0.0"]
@@ -37,6 +42,13 @@ module = [
]
ignore_missing_imports = true

[[tool.mypy.overrides]]
module = [
	"tqdm",
	"tqdm.*"
]
ignore_missing_imports = true

[tool.semantic_release]
version_toml = "pyproject.toml:tool.poetry.version"
tag_format = "{version}"
@@ -25,15 +25,22 @@ if [ -z "$(git status --porcelain)" ]; then
	exit 0
fi

std_version_args=()
if [[ -n "${1:-}" ]]; then
	std_version_args+=( "--release-as" "$1" )
	echo "Parameter $1 was supplied, so we should use release-as"
else
	echo "No release-as parameter specifed."
fi
# Working directory clean
echo "Doing a dry run..."
npx standard-version --dry-run
npx standard-version --dry-run "${std_version_args[@]}"
read -p "Does that look good? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
	# do dangerous stuff
	npx standard-version
	npx standard-version "${std_version_args[@]}"
	git push --follow-tags origin master
else
	echo "okay, never mind then..."
@@ -1,4 +1,4 @@
const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d)(")/mg;
const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d+)(")/mg;

module.exports.readVersion = function (contents) {
	const result = pattern.exec(contents);
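The one-character change above matters for multi-digit patch versions: the old pattern allowed only a single digit after the second dot, so a version like 0.7.10 never matched. A quick check of the difference in Python (only the version fragment of the pattern is reproduced here; the file itself is JavaScript):

import re

old_fragment = r'version = "(?P<vers>\d+\.\d+\.\d)"'
new_fragment = r'version = "(?P<vers>\d+\.\d+\.\d+)"'

text = 'version = "0.7.10"'
assert re.search(old_fragment, text) is None  # single-digit patch fails to match
assert re.search(new_fragment, text)["vers"] == "0.7.10"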
177 tests/__snapshots__/test_bayes_run_with_ss.ambr (new file)
@@ -0,0 +1,177 @@
|
||||
# serializer version: 1
|
||||
# name: test_basic_analysis
|
||||
list([
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.3333333333333333,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.3333333333333333,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.3333333333333333,
|
||||
'dipole_frequency_1': 0.006029931414230269,
|
||||
'dipole_frequency_2': 85436.78758379082,
|
||||
'dipole_location_1': array([-4.76615152, -6.33160296, 5.29522808]),
|
||||
'dipole_location_2': array([-4.72700391, -2.06478573, 6.52467702]),
|
||||
'dipole_moment_1': array([ 860.14181416, -450.27082062, -239.60852996]),
|
||||
'dipole_moment_2': array([ 908.18325588, -208.52681777, -362.93214244]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.45,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.3103448275862069,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.9,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.6206896551724138,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.06896551724137932,
|
||||
'dipole_frequency_1': 102275.63477261562,
|
||||
'dipole_frequency_2': 1755280.9783485082,
|
||||
'dipole_location_1': array([ 4.71515397, -9.70362197, 5.43016546]),
|
||||
'dipole_location_2': array([3.42476038, 3.88562934, 5.15034328]),
|
||||
'dipole_moment_1': array([-502.60742674, -790.60222587, 349.7626267 ]),
|
||||
'dipole_moment_2': array([-192.42708465, -434.81009148, -879.7226844 ]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.7,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.6631578947368421,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.18947368421052635,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.7,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.1473684210526316,
|
||||
'dipole_frequency_1': 2896.799464036654,
|
||||
'dipole_frequency_2': 9.980565189326681e-05,
|
||||
'dipole_location_1': array([-4.97465789, 12.54716531, 6.06324588]),
|
||||
'dipole_location_2': array([ 9.84518459, -11.1183876 , 7.35028226]),
|
||||
'dipole_moment_1': array([997.67961917, 19.6376112 , 65.19004305]),
|
||||
'dipole_moment_2': array([305.63093655, 440.57669389, 844.08643362]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.663157894736842,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.18947368421052635,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.1473684210526316,
|
||||
'dipole_frequency_1': 1.4522667818288244,
|
||||
'dipole_frequency_2': 2704.9795645301197,
|
||||
'dipole_location_1': array([ 7.38183022, 16.6745801 , 7.10428414]),
|
||||
'dipole_location_2': array([-8.15636906, -9.56609132, 6.34141559]),
|
||||
'dipole_moment_1': array([-145.9924693 , 738.74936496, 657.97839986]),
|
||||
'dipole_moment_2': array([-960.16113239, 104.96824669, -258.98314046]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.9,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.9465776293823038,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.030050083472454105,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.1,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.02337228714524208,
|
||||
'dipole_frequency_1': 3827.2315421318913,
|
||||
'dipole_frequency_2': 1.9301094166184413e-05,
|
||||
'dipole_location_1': array([ 5.02067673, -0.9783039 , 6.1431897 ]),
|
||||
'dipole_location_2': array([ 4.66628999, 10.80907459, 7.21771744]),
|
||||
'dipole_moment_1': array([ 871.30659253, -299.17389491, -388.99846068]),
|
||||
'dipole_moment_2': array([-189.87268624, 677.28285845, 710.79975568]),
|
||||
}),
|
||||
])
|
||||
# ---
|
||||
# name: test_bayesss_with_tighter_cost
|
||||
list([
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.33333333333333337,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.33333333333333337,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.33333333333333337,
|
||||
'dipole_frequency_1': 0.006029931414230269,
|
||||
'dipole_frequency_2': 85436.78758379082,
|
||||
'dipole_location_1': array([-4.76615152, -6.33160296, 5.29522808]),
|
||||
'dipole_location_2': array([-4.72700391, -2.06478573, 6.52467702]),
|
||||
'dipole_moment_1': array([ 860.14181416, -450.27082062, -239.60852996]),
|
||||
'dipole_moment_2': array([ 908.18325588, -208.52681777, -362.93214244]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.0109375,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.1044776119402985,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.03125,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.2985074626865672,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.0625,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.5970149253731344,
|
||||
'dipole_frequency_1': 102275.63477261562,
|
||||
'dipole_frequency_2': 1755280.9783485082,
|
||||
'dipole_location_1': array([ 4.71515397, -9.70362197, 5.43016546]),
|
||||
'dipole_location_2': array([3.42476038, 3.88562934, 5.15034328]),
|
||||
'dipole_moment_1': array([-502.60742674, -790.60222587, 349.7626267 ]),
|
||||
'dipole_moment_2': array([-192.42708465, -434.81009148, -879.7226844 ]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 7.291135021404688e-05,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.021875,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.4666326413699001,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.0125,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.5332944472798858,
|
||||
'dipole_frequency_1': 2896.799464036654,
|
||||
'dipole_frequency_2': 9.980565189326681e-05,
|
||||
'dipole_location_1': array([-4.97465789, 12.54716531, 6.06324588]),
|
||||
'dipole_location_2': array([ 9.84518459, -11.1183876 , 7.35028226]),
|
||||
'dipole_moment_1': array([997.67961917, 19.6376112 , 65.19004305]),
|
||||
'dipole_moment_2': array([305.63093655, 440.57669389, 844.08643362]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 7.291135021404688e-05,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.4666326413699001,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.5332944472798858,
|
||||
'dipole_frequency_1': 1.4522667818288244,
|
||||
'dipole_frequency_2': 2704.9795645301197,
|
||||
'dipole_location_1': array([ 7.38183022, 16.6745801 , 7.10428414]),
|
||||
'dipole_location_2': array([-8.15636906, -9.56609132, 6.34141559]),
|
||||
'dipole_moment_1': array([-145.9924693 , 738.74936496, 657.97839986]),
|
||||
'dipole_moment_2': array([-960.16113239, 104.96824669, -258.98314046]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 0.175,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 0.00012008361740869356,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.05625,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.24702915581216964,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.15,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.7528507605704217,
|
||||
'dipole_frequency_1': 3827.2315421318913,
|
||||
'dipole_frequency_2': 1.9301094166184413e-05,
|
||||
'dipole_location_1': array([ 5.02067673, -0.9783039 , 6.1431897 ]),
|
||||
'dipole_location_2': array([ 4.66628999, 10.80907459, 7.21771744]),
|
||||
'dipole_moment_1': array([ 871.30659253, -299.17389491, -388.99846068]),
|
||||
'dipole_moment_2': array([-189.87268624, 677.28285845, 710.79975568]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 4.9116305003549454e-08,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 0.0109375,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.11316396672817797,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.028125,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.886835984155517,
|
||||
'dipole_frequency_1': 1.1715179359592061e-05,
|
||||
'dipole_frequency_2': 0.0019103783276337497,
|
||||
'dipole_location_1': array([-0.95736547, 1.09273812, 7.47158641]),
|
||||
'dipole_location_2': array([ -3.18510322, -15.64493131, 5.81623624]),
|
||||
'dipole_moment_1': array([-184.64961369, 956.56786553, 225.57136075]),
|
||||
'dipole_moment_2': array([ -34.63395137, 801.17771816, -597.42342885]),
|
||||
}),
|
||||
dict({
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedxy-pfixexp_3-dipole_count_2_prob': 1.977090156727901e-10,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_likelihood': 9.765625e-06,
|
||||
'connors_geom-5height-orientation_fixedz-pfixexp_3-dipole_count_2_prob': 0.00045552157211010855,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_likelihood': 0.002734375,
|
||||
'connors_geom-5height-orientation_free-pfixexp_3-dipole_count_2_prob': 0.9995444782301809,
|
||||
'dipole_frequency_1': 999786.9069039805,
|
||||
'dipole_frequency_2': 186034.67996840767,
|
||||
'dipole_location_1': array([-5.59679125, 6.3411602 , 5.33602522]),
|
||||
'dipole_location_2': array([-0.03412955, -6.83522954, 5.58551513]),
|
||||
'dipole_moment_1': array([826.38270589, 491.81526944, 274.24325726]),
|
||||
'dipole_moment_2': array([ 202.74745884, -656.07483714, -726.95204519]),
|
||||
}),
|
||||
])
|
||||
# ---
|
0 tests/indexify/__init__.py (new file)
12 tests/indexify/test_indexify.py (new file)
@@ -0,0 +1,12 @@
import deepdog.indexify
import logging

_logger = logging.getLogger(__name__)


def test_indexifier():
	weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
	indexifier = deepdog.indexify.Indexifier(weight_dict)
	_logger.debug(f"setting up indexifier {indexifier}")
	assert indexifier.indexify(0) == {"key_1": 1, "key_2": "a"}
	assert indexifier.indexify(5) == {"key_1": 2, "key_2": "c"}
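The expected values here follow from treating the weight lists as axes of a cartesian product in insertion order, with the last key varying fastest. A short sketch that reproduces the same mapping (an illustration of the expected behaviour, not necessarily how Indexifier is implemented internally):

import itertools

weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
combos = list(itertools.product(*weight_dict.values()))
# combos[5] == (2, "c"), so indexify(5) == {"key_1": 2, "key_2": "c"}
assert dict(zip(weight_dict.keys(), combos[5])) == {"key_1": 2, "key_2": "c"}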
0 tests/results/__init__.py (new file)
28 tests/results/test_column_results.py (new file)
@@ -0,0 +1,28 @@
import deepdog.results


def test_parse_groupdict():
	example_column_name = (
		"geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success"
	)

	parsed = deepdog.results._parse_bayesrun_column(example_column_name)
	expected = deepdog.results.BayesrunColumnParsed(
		{
			"xmin": "-20",
			"xmax": "20",
			"ymin": "-10",
			"ymax": "10",
			"zmin": "0",
			"zmax": "5",
			"orientation": "free",
			"avg_filled": "100",
			"field_name": "success",
		}
	)
	assert parsed == expected


# def test_parse_no_match_column_name():
# 	parsed = deepdog.results.parse_bayesrun_column("There's nothing here")
# 	assert parsed is None
158 tests/test_bayes_run_with_ss.py (new file)
@@ -0,0 +1,158 @@
import deepdog
import logging
import logging.config

import numpy.random

from pdme.model import (
	LogSpacedRandomCountMultipleDipoleFixedMagnitudeModel,
	LogSpacedRandomCountMultipleDipoleFixedMagnitudeXYModel,
	LogSpacedRandomCountMultipleDipoleFixedMagnitudeFixedOrientationModel,
)


_logger = logging.getLogger(__name__)


def fixed_z_model_func(
	xmin,
	xmax,
	ymin,
	ymax,
	zmin,
	zmax,
	wexp_min,
	wexp_max,
	pfixed,
	n_max,
	prob_occupancy,
):
	return LogSpacedRandomCountMultipleDipoleFixedMagnitudeFixedOrientationModel(
		xmin,
		xmax,
		ymin,
		ymax,
		zmin,
		zmax,
		wexp_min,
		wexp_max,
		pfixed,
		0,
		0,
		n_max,
		prob_occupancy,
	)


def get_model(orientation):
	model_funcs = {
		"fixedz": fixed_z_model_func,
		"free": LogSpacedRandomCountMultipleDipoleFixedMagnitudeModel,
		"fixedxy": LogSpacedRandomCountMultipleDipoleFixedMagnitudeXYModel,
	}
	model = model_funcs[orientation](
		-10,
		10,
		-17.5,
		17.5,
		5,
		7.5,
		-5,
		6.5,
		10**3,
		2,
		0.99999999,
	)
	model.n = 2
	model.rng = numpy.random.default_rng(1234)

	return (
		f"connors_geom-5height-orientation_{orientation}-pfixexp_{3}-dipole_count_{2}",
		model,
	)


def test_basic_analysis(snapshot):

	dot_positions = [[0, 0, 0], [0, 1, 0]]

	freqs = [1, 10, 100]
	models = []

	orientations = ["free", "fixedxy", "fixedz"]
	for orientation in orientations:
		models.append(get_model(orientation))

	_logger.info(f"have {len(models)} models to look at")
	if len(models) == 1:
		_logger.info(f"only one model, name: {models[0][0]}")

	square_run = deepdog.BayesRunWithSubspaceSimulation(
		dot_positions,
		freqs,
		models,
		models[0][1],
		filename_slug="test",
		end_threshold=0.9,
		ss_n_c=5,
		ss_n_s=2,
		ss_m_max=10,
		ss_target_cost=150,
		ss_level_0_seed=200,
		ss_mcmc_seed=20,
		ss_use_adaptive_steps=True,
		ss_default_phi_step=0.01,
		ss_default_theta_step=0.01,
		ss_default_r_step=0.01,
		ss_default_w_log_step=0.01,
		ss_default_upper_w_log_step=4,
		ss_dump_last_generation=False,
		write_output_to_bayesruncsv=False,
		ss_initial_costs_chunk_size=1000,
	)
	result = square_run.go()

	assert result == snapshot


def test_bayesss_with_tighter_cost(snapshot):

	dot_positions = [[0, 0, 0], [0, 1, 0]]

	freqs = [1, 10, 100]
	models = []

	orientations = ["free", "fixedxy", "fixedz"]
	for orientation in orientations:
		models.append(get_model(orientation))

	_logger.info(f"have {len(models)} models to look at")
	if len(models) == 1:
		_logger.info(f"only one model, name: {models[0][0]}")

	square_run = deepdog.BayesRunWithSubspaceSimulation(
		dot_positions,
		freqs,
		models,
		models[0][1],
		filename_slug="test",
		end_threshold=0.9,
		ss_n_c=5,
		ss_n_s=2,
		ss_m_max=10,
		ss_target_cost=1.5,
		ss_level_0_seed=200,
		ss_mcmc_seed=20,
		ss_use_adaptive_steps=True,
		ss_default_phi_step=0.01,
		ss_default_theta_step=0.01,
		ss_default_r_step=0.01,
		ss_default_w_log_step=0.01,
		ss_default_upper_w_log_step=4,
		ss_dump_last_generation=False,
		write_output_to_bayesruncsv=False,
		ss_initial_costs_chunk_size=1,
	)
	result = square_run.go()

	assert result == snapshot