Compare commits

46 Commits

266d6dd583
c573f8806d
a015daf5ff
a089951bbe
7568aef842
c4b6cbbb6f
1cf4454153
bf15f4a7b7
12903b2540
959b9af378
8fd1b75e13
17ae84879d
fc2880ba2f
589c16f25c
743c3e22ae
b3e2acd79c
de1ec3e700
f4964a19ea
08d73c73e9
7ea1d715f6
ed102799d1
0b8d14ef48
a5d0d257d7
6ee995e561
a217ad2c75
039f68ee97
e9dd21f69b
8303fc7860
2418e3a263
73465203b2
01ba4af229
2c5c122820
0a1a27759b
558a4df643
6f141af0fe
2c99fcf687
ad0ace4da3
3f1265e3ec
969f01e9c5
b282ffa800
91e9e5198e
d7e0f13ca5
74de2b0433
c036028902
690ad9e288
bd56f24774
.gitignore (vendored, 4 additions)

@@ -114,6 +114,10 @@ ENV/
 env.bak/
 venv.bak/
 
+# direnv
+.envrc
+.direnv
+
 # Spyder project settings
 .spyderproject
 .spyproject
CHANGELOG.md (68 additions)

@@ -2,6 +2,74 @@
 
 All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
 
+## [0.7.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.7...0.7.0) (2023-05-01)
+
+
+### ⚠ BREAKING CHANGES
+
+* removes fastfilter parameter because it should never be needed
+
+### Features
+
+* adds pair capability to real spectrum run hopefully ([a089951](https://gitea.deepak.science:2222/physics/deepdog/commit/a089951bbefcd8a0b2efeb49b7a8090412cbb23d))
+* removes fastfilter parameter because it should never be needed ([a015daf](https://gitea.deepak.science:2222/physics/deepdog/commit/a015daf5ff6fa5f6155c8d7e02981b588840a5b0))
+
+### [0.6.7](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.6...0.6.7) (2023-04-14)
+
+
+### Features
+
+* adds option to cap core count for real spectrum run ([bf15f4a](https://gitea.deepak.science:2222/physics/deepdog/commit/bf15f4a7b7f59504983624e7d512ed7474372032))
+* adds option to cap core count for temp aware run ([12903b2](https://gitea.deepak.science:2222/physics/deepdog/commit/12903b2540cefb040174d230bc0d04719a6dc1b7))
+
+
+### Bug Fixes
+
+* avoids redefinition of core count in loop ([1cf4454](https://gitea.deepak.science:2222/physics/deepdog/commit/1cf44541531541088198bd4599d467df3e1acbcf))
+
+### [0.6.6](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.5...0.6.6) (2023-04-09)
+
+
+### Bug Fixes
+
+* removes bad logging in multiprocessing function ([8fd1b75](https://gitea.deepak.science:2222/physics/deepdog/commit/8fd1b75e1378301210bfa8f14dd09174bbd21414))
+
+### [0.6.5](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.4...0.6.5) (2023-04-09)
+
+
+### Features
+
+* adds temp aware guy using new pdme temp-flexible feature for bundling temp models ([de1ec3e](https://gitea.deepak.science:2222/physics/deepdog/commit/de1ec3e70062d418e0d4c89716905cc9313d2e26))
+
+### [0.6.4](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.3...0.6.4) (2022-08-13)
+
+
+### Features
+
+* Prints model names while running ([7ea1d71](https://gitea.deepak.science:2222/physics/deepdog/commit/7ea1d715f67e81c9fa841c5a62f1cc700ff7363d))
+
+### [0.6.3](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.2...0.6.3) (2022-06-12)
+
+
+### Features
+
+* adds fast filter variant ([2c5c122](https://gitea.deepak.science:2222/physics/deepdog/commit/2c5c1228209e51d17253f07470e2f1e6dc6872d7))
+* adds tester for fast filter real spectrum ([0a1a277](https://gitea.deepak.science:2222/physics/deepdog/commit/0a1a27759b0d4ab01da214b76ab14bf2b1fe00e3))
+
+### [0.6.2](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.1...0.6.2) (2022-05-26)
+
+
+### Features
+
+* adds better import api for real data run ([d7e0f13](https://gitea.deepak.science:2222/physics/deepdog/commit/d7e0f13ca55197b24cb534c80f321ee76b9c4a40))
+
+### [0.6.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.6.0...0.6.1) (2022-05-22)
+
+
+### Features
+
+* adds new runner for real spectra ([bd56f24](https://gitea.deepak.science:2222/physics/deepdog/commit/bd56f247748babb2ee1f2a1182d25aa968bff5a5))
+
 ## [0.6.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.5.0...0.6.0) (2022-05-22)
 
 
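The 0.6.7 entries above describe the new core-count cap. A rough illustrative sketch of how that option is exercised, not taken from the repository itself; `measurements` and `models_with_names` are assumed to have been prepared elsewhere with pdme:

import deepdog

# Hypothetical inputs, assumed to be built elsewhere via pdme:
#   measurements: Sequence[pdme.measurement.DotRangeMeasurement]
#   models_with_names: Sequence[Tuple[str, pdme.model.DipoleModel]]
run = deepdog.RealSpectrumRun(
    measurements,
    models_with_names,
    "cap-example",
    cap_core_count=4,  # 0 (the default) keeps the cpu_count() - 1 behaviour
)
run.go()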
Jenkinsfile (vendored, 20 lines changed)

@@ -4,7 +4,7 @@ pipeline {
       label 'deepdog' // all your pods will be named with this prefix, followed by a unique id
       idleMinutes 5 // how long the pod will live after no jobs have run on it
       yamlFile 'jenkins/ci-agent-pod.yaml' // path to the pod definition relative to the root of our project
-      defaultContainer 'python' // define a default container if more than a few stages use it, will default to jnlp container
+      defaultContainer 'poetry' // define a default container if more than a few stages use it, will default to jnlp container
     }
   }
 
@@ -12,36 +12,30 @@ pipeline {
     parallelsAlwaysFailFast()
   }
 
-  environment {
-    POETRY_HOME="/opt/poetry"
-    POETRY_VERSION="1.1.12"
-  }
-
   stages {
     stage('Build') {
       steps {
         echo 'Building...'
         sh 'python --version'
-        sh 'curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python'
-        sh '${POETRY_HOME}/bin/poetry --version'
-        sh '${POETRY_HOME}/bin/poetry install'
+        sh 'poetry --version'
+        sh 'poetry install'
       }
     }
     stage('Test') {
      parallel{
        stage('pytest') {
          steps {
-           sh '${POETRY_HOME}/bin/poetry run pytest'
+           sh 'poetry run pytest'
          }
        }
        stage('lint') {
          steps {
-           sh '${POETRY_HOME}/bin/poetry run flake8 deepdog tests'
+           sh 'poetry run flake8 deepdog tests'
          }
        }
        stage('mypy') {
          steps {
-           sh '${POETRY_HOME}/bin/poetry run mypy deepdog'
+           sh 'poetry run mypy deepdog'
          }
        }
      }
@@ -57,7 +51,7 @@ pipeline {
       }
       steps {
         echo 'Deploying...'
-        sh '${POETRY_HOME}/bin/poetry publish -u ${PYPI_USR} -p ${PYPI_PSW} --build'
+        sh 'poetry publish -u ${PYPI_USR} -p ${PYPI_PSW} --build'
       }
     }
 
README.md

@@ -5,7 +5,7 @@
 [![Build Status](https://jenkins.deepak.science/buildStatus/icon?job=gitea-physics%2Fdeepdog%2Fmaster&style=plastic)](https://jenkins.deepak.science/job/gitea-physics/job/deepdog/job/master/)
-![Lines of code](https://img.shields.io/tokei/lines/gitea/physics/deepdog?gitea_url=https%3A%2F%2Fgitea.deepak.science%3A2222&style=plastic)
+![Maintenance](https://img.shields.io/maintenance/yes/2023)
 ![GitHub](https://img.shields.io/github/license/dmallubhotla/deepdog?style=plastic)
 ![Gitea Release](https://img.shields.io/gitea/v/release/physics/deepdog?gitea_url=https%3A%2F%2Fgitea.deepak.science%3A2222&style=plastic)
 
 The DiPole DiaGnostic tool.
 
deepdog/__init__.py

@@ -2,6 +2,8 @@ import logging
 from deepdog.meta import __version__
 from deepdog.bayes_run import BayesRun
 from deepdog.bayes_run_simulpairs import BayesRunSimulPairs
+from deepdog.real_spectrum_run import RealSpectrumRun
+from deepdog.temp_aware_real_spectrum_run import TempAwareRealSpectrumRun
 
 
 def get_version():
@@ -12,6 +14,8 @@ __all__ = [
     "get_version",
     "BayesRun",
     "BayesRunSimulPairs",
+    "RealSpectrumRun",
+    "TempAwareRealSpectrumRun",
 ]
 
 
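With the two new imports and __all__ entries above, the runners become part of deepdog's top-level API. A minimal sketch, using only the names exported above:

import deepdog

print(deepdog.get_version())

# The runners added in this compare are importable from the package root:
real_run_cls = deepdog.RealSpectrumRun
temp_aware_run_cls = deepdog.TempAwareRealSpectrumRun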
deepdog/real_spectrum_run.py (new file, 307 lines)

import pdme.inputs
import pdme.model
import pdme.measurement
import pdme.measurement.input_types
import pdme.measurement.oscillating_dipole
import pdme.util.fast_v_calc
import pdme.util.fast_nonlocal_spectrum
from typing import Sequence, Tuple, List, Dict, Union, Optional
import datetime
import csv
import multiprocessing
import logging
import numpy


# TODO: remove hardcode
CHUNKSIZE = 50


_logger = logging.getLogger(__name__)


def get_a_result_fast_filter_pairs(input) -> int:
    (
        model,
        dot_inputs,
        lows,
        highs,
        pair_inputs,
        pair_lows,
        pair_highs,
        monte_carlo_count,
        seed,
    ) = input

    rng = numpy.random.default_rng(seed)
    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
    sample_dipoles = model.get_monte_carlo_dipole_inputs(
        monte_carlo_count, None, rng_to_use=rng
    )

    current_sample = sample_dipoles
    for di, low, high in zip(dot_inputs, lows, highs):
        if len(current_sample) < 1:
            break
        vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
            numpy.array([di]), current_sample
        )
        current_sample = current_sample[numpy.all((vals > low) & (vals < high), axis=1)]

    for pi, plow, phigh in zip(pair_inputs, pair_lows, pair_highs):
        if len(current_sample) < 1:
            break
        vals = pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
            numpy.array([pi]), current_sample
        )
        current_sample = current_sample[
            numpy.all(
                ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
                axis=1,
            )
        ]
    return len(current_sample)


def get_a_result_fast_filter(input) -> int:
    model, dot_inputs, lows, highs, monte_carlo_count, seed = input

    rng = numpy.random.default_rng(seed)
    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
    sample_dipoles = model.get_monte_carlo_dipole_inputs(
        monte_carlo_count, None, rng_to_use=rng
    )

    current_sample = sample_dipoles
    for di, low, high in zip(dot_inputs, lows, highs):
        if len(current_sample) < 1:
            break
        vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
            numpy.array([di]), current_sample
        )
        current_sample = current_sample[numpy.all((vals > low) & (vals < high), axis=1)]
    return len(current_sample)


class RealSpectrumRun:
    """
    A bayes run given some real data.

    Parameters
    ----------
    measurements : Sequence[pdme.measurement.DotRangeMeasurement]
        The dot inputs for this bayes run.

    models_with_names : Sequence[Tuple(str, pdme.model.DipoleModel)]
        The models to evaluate.

    actual_model : pdme.model.DipoleModel
        The model which is actually correct.

    filename_slug : str
        The filename slug to include.

    run_count: int
        The number of runs to do.
    """

    def __init__(
        self,
        measurements: Sequence[pdme.measurement.DotRangeMeasurement],
        models_with_names: Sequence[Tuple[str, pdme.model.DipoleModel]],
        filename_slug: str,
        monte_carlo_count: int = 10000,
        monte_carlo_cycles: int = 10,
        target_success: int = 100,
        max_monte_carlo_cycles_steps: int = 10,
        chunksize: int = CHUNKSIZE,
        initial_seed: int = 12345,
        cap_core_count: int = 0,
        pair_measurements: Optional[
            Sequence[pdme.measurement.DotPairRangeMeasurement]
        ] = None,
    ) -> None:
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]

        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
            self.dot_inputs
        )

        if pair_measurements is not None:
            self.pair_measurements = pair_measurements
            self.use_pair_measurements = True
            self.dot_pair_inputs = [
                (measure.r1, measure.r2, measure.f)
                for measure in self.pair_measurements
            ]
            self.dot_pair_inputs_array = (
                pdme.measurement.input_types.dot_pair_inputs_to_array(
                    self.dot_pair_inputs
                )
            )
        else:
            self.use_pair_measurements = False

        self.models = [model for (_, model) in models_with_names]
        self.model_names = [name for (name, _) in models_with_names]
        self.model_count = len(self.models)

        self.monte_carlo_count = monte_carlo_count
        self.monte_carlo_cycles = monte_carlo_cycles
        self.target_success = target_success
        self.max_monte_carlo_cycles_steps = max_monte_carlo_cycles_steps

        self.csv_fields = []

        self.compensate_zeros = True
        self.chunksize = chunksize
        for name in self.model_names:
            self.csv_fields.extend([f"{name}_success", f"{name}_count", f"{name}_prob"])

        # for now initialise priors as uniform.
        self.probabilities = [1 / self.model_count] * self.model_count

        timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

        ff_string = "fast_filter"

        self.filename = f"{timestamp}-{filename_slug}.realdata.{ff_string}.bayesrun.csv"
        self.initial_seed = initial_seed

        self.cap_core_count = cap_core_count

    def go(self) -> None:
        with open(self.filename, "a", newline="") as outfile:
            writer = csv.DictWriter(outfile, fieldnames=self.csv_fields, dialect="unix")
            writer.writeheader()

        (
            lows,
            highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.measurements
        )

        pair_lows = None
        pair_highs = None
        if self.use_pair_measurements:
            (
                pair_lows,
                pair_highs,
            ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
                self.pair_measurements
            )

        # define a new seed sequence for each run
        seed_sequence = numpy.random.SeedSequence(self.initial_seed)

        results = []
        _logger.debug("Going to iterate over models now")
        core_count = multiprocessing.cpu_count() - 1 or 1
        if (self.cap_core_count >= 1) and (self.cap_core_count < core_count):
            core_count = self.cap_core_count
        _logger.info(f"Using {core_count} cores")
        for model_count, (model, model_name) in enumerate(
            zip(self.models, self.model_names)
        ):
            _logger.debug(f"Doing model #{model_count}: {model_name}")
            with multiprocessing.Pool(core_count) as pool:
                cycle_count = 0
                cycle_success = 0
                cycles = 0
                while (cycles < self.max_monte_carlo_cycles_steps) and (
                    cycle_success <= self.target_success
                ):
                    _logger.debug(f"Starting cycle {cycles}")
                    cycles += 1
                    current_success = 0
                    cycle_count += self.monte_carlo_count * self.monte_carlo_cycles

                    # generate a seed from the sequence for each core.
                    # note this needs to be inside the loop for monte carlo cycle steps!
                    # that way we get more stuff.
                    seeds = seed_sequence.spawn(self.monte_carlo_cycles)

                    if self.use_pair_measurements:
                        current_success = sum(
                            pool.imap_unordered(
                                get_a_result_fast_filter_pairs,
                                [
                                    (
                                        model,
                                        self.dot_inputs_array,
                                        lows,
                                        highs,
                                        self.dot_pair_inputs_array,
                                        pair_lows,
                                        pair_highs,
                                        self.monte_carlo_count,
                                        seed,
                                    )
                                    for seed in seeds
                                ],
                                self.chunksize,
                            )
                        )
                    else:
                        current_success = sum(
                            pool.imap_unordered(
                                get_a_result_fast_filter,
                                [
                                    (
                                        model,
                                        self.dot_inputs_array,
                                        lows,
                                        highs,
                                        self.monte_carlo_count,
                                        seed,
                                    )
                                    for seed in seeds
                                ],
                                self.chunksize,
                            )
                        )

                    cycle_success += current_success
                    _logger.debug(f"current running successes: {cycle_success}")
                results.append((cycle_count, cycle_success))

        _logger.debug("Done, constructing output now")
        row: Dict[str, Union[int, float, str]] = {}

        successes: List[float] = []
        counts: List[int] = []
        for model_index, (name, (count, result)) in enumerate(
            zip(self.model_names, results)
        ):
            row[f"{name}_success"] = result
            row[f"{name}_count"] = count
            successes.append(max(result, 0.5))
            counts.append(count)

        success_weight = sum(
            [
                (succ / count) * prob
                for succ, count, prob in zip(successes, counts, self.probabilities)
            ]
        )
        new_probabilities = [
            (succ / count) * old_prob / success_weight
            for succ, count, old_prob in zip(successes, counts, self.probabilities)
        ]
        self.probabilities = new_probabilities
        for name, probability in zip(self.model_names, self.probabilities):
            row[f"{name}_prob"] = probability
        _logger.info(row)

        with open(self.filename, "a", newline="") as outfile:
            writer = csv.DictWriter(outfile, fieldnames=self.csv_fields, dialect="unix")
            writer.writerow(row)
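A rough usage sketch for the new RealSpectrumRun, not taken from the repository: `measurements`, `pair_measurements`, and `models_with_names` are assumed to have been built elsewhere with pdme (DotRangeMeasurement and DotPairRangeMeasurement sequences, and (name, DipoleModel) pairs). Each cycle filters Monte Carlo dipole samples against the measured ranges, and the surviving-sample counts drive the Bayes-style probability update that ends up in the `_prob` columns of the output CSV.

from deepdog import RealSpectrumRun

# measurements, pair_measurements and models_with_names are hypothetical
# placeholders; constructing them is a pdme concern and not shown here.
run = RealSpectrumRun(
    measurements,
    models_with_names,
    filename_slug="sample-experiment",
    monte_carlo_count=10000,
    pair_measurements=pair_measurements,  # optional; switches on the pair fast filter
    cap_core_count=8,  # optional cap on the multiprocessing pool size
)
run.go()  # appends to <timestamp>-sample-experiment.realdata.fast_filter.bayesrun.csv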
deepdog/temp_aware_real_spectrum_run.py (new file, 231 lines)

import pdme.inputs
import pdme.model
import pdme.measurement
import pdme.measurement.input_types
import pdme.measurement.oscillating_dipole
import pdme.util.fast_v_calc
import pdme.util.fast_nonlocal_spectrum
from typing import Sequence, Tuple, List, Dict, Union, Mapping
import datetime
import csv
import multiprocessing
import logging
import numpy


# TODO: remove hardcode
CHUNKSIZE = 50


_logger = logging.getLogger(__name__)


def get_a_result_fast_filter(input) -> int:
    # (
    #     model,
    #     self.dot_inputs_array_dict,
    #     low_high_dict,
    #     self.monte_carlo_count,
    #     seed,
    # )
    model, dot_inputs_dict, low_high_dict, monte_carlo_count, seed = input

    rng = numpy.random.default_rng(seed)
    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
    sample_dipoles = model.get_monte_carlo_dipole_inputs(
        monte_carlo_count, None, rng_to_use=rng
    )

    current_sample = sample_dipoles
    for temp in dot_inputs_dict.keys():
        dot_inputs = dot_inputs_dict[temp]
        lows, highs = low_high_dict[temp]
        for di, low, high in zip(dot_inputs, lows, highs):
            if len(current_sample) < 1:
                break
            vals = pdme.util.fast_v_calc.fast_vs_for_asymmetric_dipoleses(
                numpy.array([di]), current_sample, temp
            )
            current_sample = current_sample[
                numpy.all((vals > low) & (vals < high), axis=1)
            ]
    return len(current_sample)


class TempAwareRealSpectrumRun:
    """
    A bayes run given some real data, with potentially variable temperature.

    Parameters
    ----------
    measurements_dict : Dict[float, Sequence[pdme.measurement.DotRangeMeasurement]]
        The dot inputs for this bayes run, in a dictionary indexed by temperatures

    models_with_names : models_with_names: Sequence[Tuple[str, pdme.model.DipoleModel]],
        The models to evaluate.

    actual_model : pdme.model.DipoleModel
        The model which is actually correct.

    filename_slug : str
        The filename slug to include.

    run_count: int
        The number of runs to do.
    """

    def __init__(
        self,
        measurements_dict: Mapping[
            float, Sequence[pdme.measurement.DotRangeMeasurement]
        ],
        models_with_names: Sequence[Tuple[str, pdme.model.DipoleModel]],
        filename_slug: str,
        monte_carlo_count: int = 10000,
        monte_carlo_cycles: int = 10,
        target_success: int = 100,
        max_monte_carlo_cycles_steps: int = 10,
        chunksize: int = CHUNKSIZE,
        initial_seed: int = 12345,
        cap_core_count: int = 0,
    ) -> None:
        self.measurements_dict = measurements_dict
        self.dot_inputs_dict = {
            k: [(measure.r, measure.f) for measure in measurements]
            for k, measurements in measurements_dict.items()
        }

        self.dot_inputs_array_dict = {
            k: pdme.measurement.input_types.dot_inputs_to_array(dot_inputs)
            for k, dot_inputs in self.dot_inputs_dict.items()
        }

        self.models = [model for (_, model) in models_with_names]
        self.model_names = [name for (name, _) in models_with_names]
        self.model_count = len(self.models)

        self.monte_carlo_count = monte_carlo_count
        self.monte_carlo_cycles = monte_carlo_cycles
        self.target_success = target_success
        self.max_monte_carlo_cycles_steps = max_monte_carlo_cycles_steps

        self.csv_fields = []

        self.compensate_zeros = True
        self.chunksize = chunksize
        for name in self.model_names:
            self.csv_fields.extend([f"{name}_success", f"{name}_count", f"{name}_prob"])

        # for now initialise priors as uniform.
        self.probabilities = [1 / self.model_count] * self.model_count

        timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        ff_string = "fast_filter"
        self.filename = f"{timestamp}-{filename_slug}.realdata.{ff_string}.bayesrun.csv"
        self.initial_seed = initial_seed

        self.cap_core_count = cap_core_count

    def go(self) -> None:
        with open(self.filename, "a", newline="") as outfile:
            writer = csv.DictWriter(outfile, fieldnames=self.csv_fields, dialect="unix")
            writer.writeheader()

        low_high_dict = {}
        for temp, measurements in self.measurements_dict.items():
            (
                lows,
                highs,
            ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
                measurements
            )
            low_high_dict[temp] = (lows, highs)

        # define a new seed sequence for each run
        seed_sequence = numpy.random.SeedSequence(self.initial_seed)

        results = []
        _logger.debug("Going to iterate over models now")
        core_count = multiprocessing.cpu_count() - 1 or 1
        if (self.cap_core_count >= 1) and (self.cap_core_count < core_count):
            core_count = self.cap_core_count
        _logger.info(f"Using {core_count} cores")
        for model_count, (model, model_name) in enumerate(
            zip(self.models, self.model_names)
        ):
            _logger.debug(f"Doing model #{model_count}: {model_name}")
            with multiprocessing.Pool(core_count) as pool:
                cycle_count = 0
                cycle_success = 0
                cycles = 0
                while (cycles < self.max_monte_carlo_cycles_steps) and (
                    cycle_success <= self.target_success
                ):
                    _logger.debug(f"Starting cycle {cycles}")
                    cycles += 1
                    current_success = 0
                    cycle_count += self.monte_carlo_count * self.monte_carlo_cycles

                    # generate a seed from the sequence for each core.
                    # note this needs to be inside the loop for monte carlo cycle steps!
                    # that way we get more stuff.
                    seeds = seed_sequence.spawn(self.monte_carlo_cycles)

                    result_func = get_a_result_fast_filter

                    current_success = sum(
                        pool.imap_unordered(
                            result_func,
                            [
                                (
                                    model,
                                    self.dot_inputs_array_dict,
                                    low_high_dict,
                                    self.monte_carlo_count,
                                    seed,
                                )
                                for seed in seeds
                            ],
                            self.chunksize,
                        )
                    )

                    cycle_success += current_success
                    _logger.debug(f"current running successes: {cycle_success}")
                results.append((cycle_count, cycle_success))

        _logger.debug("Done, constructing output now")
        row: Dict[str, Union[int, float, str]] = {}

        successes: List[float] = []
        counts: List[int] = []
        for model_index, (name, (count, result)) in enumerate(
            zip(self.model_names, results)
        ):
            row[f"{name}_success"] = result
            row[f"{name}_count"] = count
            successes.append(max(result, 0.5))
            counts.append(count)

        success_weight = sum(
            [
                (succ / count) * prob
                for succ, count, prob in zip(successes, counts, self.probabilities)
            ]
        )
        new_probabilities = [
            (succ / count) * old_prob / success_weight
            for succ, count, old_prob in zip(successes, counts, self.probabilities)
        ]
        self.probabilities = new_probabilities
        for name, probability in zip(self.model_names, self.probabilities):
            row[f"{name}_prob"] = probability
        _logger.info(row)

        with open(self.filename, "a", newline="") as outfile:
            writer = csv.DictWriter(outfile, fieldnames=self.csv_fields, dialect="unix")
            writer.writerow(row)
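The temp-aware variant takes the same kind of arguments, except measurements are grouped in a dictionary keyed by temperature, so each group is filtered with fast_vs_for_asymmetric_dipoleses at its own temperature. A sketch under the same assumptions as above; the temperatures and measurement variables are hypothetical placeholders:

from deepdog import TempAwareRealSpectrumRun

# measurements_10k and measurements_300k stand in for sequences of
# pdme.measurement.DotRangeMeasurement taken at 10 K and 300 K.
measurements_dict = {
    10.0: measurements_10k,
    300.0: measurements_300k,
}

run = TempAwareRealSpectrumRun(
    measurements_dict,
    models_with_names,
    filename_slug="temp-sweep",
    cap_core_count=4,
)
run.go()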
flake.lock (generated, new file, 95 lines)

{
  "nodes": {
    "flake-utils": {
      "locked": {
        "lastModified": 1648297722,
        "narHash": "sha256-W+qlPsiZd8F3XkzXOzAoR+mpFqzm3ekQkJNa+PIh1BQ=",
        "owner": "numtide",
        "repo": "flake-utils",
        "rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
        "type": "github"
      },
      "original": {
        "owner": "numtide",
        "repo": "flake-utils",
        "rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
        "type": "github"
      }
    },
    "flake-utils_2": {
      "locked": {
        "lastModified": 1653893745,
        "narHash": "sha256-0jntwV3Z8//YwuOjzhV2sgJJPt+HY6KhU7VZUL0fKZQ=",
        "owner": "numtide",
        "repo": "flake-utils",
        "rev": "1ed9fb1935d260de5fe1c2f7ee0ebaae17ed2fa1",
        "type": "github"
      },
      "original": {
        "owner": "numtide",
        "repo": "flake-utils",
        "type": "github"
      }
    },
    "nixpkgs": {
      "locked": {
        "lastModified": 1655087213,
        "narHash": "sha256-4R5oQ+OwGAAcXWYrxC4gFMTUSstGxaN8kN7e8hkum/8=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
        "type": "github"
      }
    },
    "nixpkgs_2": {
      "locked": {
        "lastModified": 1655046959,
        "narHash": "sha256-gxqHZKq1ReLDe6ZMJSbmSZlLY95DsVq5o6jQihhzvmw=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "07bf3d25ce1da3bee6703657e6a787a4c6cdcea9",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "poetry2nix": {
      "inputs": {
        "flake-utils": "flake-utils_2",
        "nixpkgs": "nixpkgs_2"
      },
      "locked": {
        "lastModified": 1654921554,
        "narHash": "sha256-hkfMdQAHSwLWlg0sBVvgrQdIiBP45U1/ktmFpY4g2Mo=",
        "owner": "nix-community",
        "repo": "poetry2nix",
        "rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
        "type": "github"
      },
      "original": {
        "owner": "nix-community",
        "repo": "poetry2nix",
        "rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "flake-utils": "flake-utils",
        "nixpkgs": "nixpkgs",
        "poetry2nix": "poetry2nix"
      }
    }
  },
  "root": "root",
  "version": 7
}
flake.nix (new file, 63 lines)

{
  description = "Application packaged using poetry2nix";

  inputs.flake-utils.url = "github:numtide/flake-utils?rev=0f8662f1319ad6abf89b3380dd2722369fc51ade";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs?rev=37b6b161e536fddca54424cf80662bce735bdd1e";
  inputs.poetry2nix.url = "github:nix-community/poetry2nix?rev=7b71679fa7df00e1678fc3f1d1d4f5f372341b63";

  outputs = { self, nixpkgs, flake-utils, poetry2nix }:
    {
      # Nixpkgs overlay providing the application
      overlay = nixpkgs.lib.composeManyExtensions [
        poetry2nix.overlay
        (final: prev: {
          # The application
          deepdog = prev.poetry2nix.mkPoetryApplication {
            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
              # …
              # workaround https://github.com/nix-community/poetry2nix/issues/568
              pdme = super.pdme.overridePythonAttrs (old: {
                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
              });
            });
            projectDir = ./.;
          };
          deepdogEnv = prev.poetry2nix.mkPoetryEnv {
            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
              # …
              # workaround https://github.com/nix-community/poetry2nix/issues/568
              pdme = super.pdme.overridePythonAttrs (old: {
                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
              });
            });
            projectDir = ./.;
          };
        })
      ];
    } // (flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ self.overlay ];
        };
      in
      {
        apps = {
          deepdog = pkgs.deepdog;
        };

        defaultApp = pkgs.deepdog;
        devShell = pkgs.mkShell {
          buildInputs = [
            pkgs.poetry
            pkgs.deepdogEnv
            pkgs.deepdog
          ];
          shellHook = ''
            export DO_NIX_CUSTOM=1
          '';
          packages = [ pkgs.nodejs-16_x ];
        };

      }));
}
jenkins/ci-agent-pod.yaml

@@ -1,9 +1,11 @@
 apiVersion: v1
 kind: Pod
 spec:
+  imagePullSecrets:
+    - name: regcreds
   containers: # list of containers that you want present for your build, you can define a default container in the Jenkinsfile
-    - name: python
-      image: python:3.8
+    - name: poetry
+      image: ghcr.io/dmallubhotla/poetry-image:1
       command: ["tail", "-f", "/dev/null"] # this or any command that is bascially a noop is required, this is so that you don't overwrite the entrypoint of the base container
       imagePullPolicy: Always # use cache or pull image for agent
       resources: # limits the resources your build contaienr
poetry.lock (generated, 782 lines changed): file diff suppressed because it is too large.
pyproject.toml

@@ -1,18 +1,20 @@
 [tool.poetry]
 name = "deepdog"
-version = "0.6.0"
+version = "0.7.0"
 description = ""
 authors = ["Deepak Mallubhotla <dmallubhotla+github@gmail.com>"]
 
 [tool.poetry.dependencies]
 python = "^3.8,<3.10"
-pdme = "0.8.2"
+pdme = "^0.8.6"
+numpy = "1.22.3"
+scipy = "1.10"
 
 [tool.poetry.dev-dependencies]
 pytest = ">=6"
 flake8 = "^4.0.1"
 pytest-cov = "^3.0.0"
-mypy = "^0.950"
+mypy = "^0.971"
 python-semantic-release = "^7.24.0"
 black = "^22.3.0"
 