Compare commits

...

36 Commits

Author SHA1 Message Date
09edf014f6
chore(release): 1.3.0
All checks were successful
gitea-physics/tantri/pipeline/tag This commit looks good
gitea-physics/tantri/pipeline/head This commit looks good
2024-12-29 21:22:30 -06:00
9c3f5aa286
fix: fixes real spectrum issue (revise these files if problems later...)
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-12-29 21:20:19 -06:00
1a72ea3e19
Merge branch 'master' of ssh://gitea.deepak.science:2222/physics/tantri
Some checks failed
gitea-physics/tantri/pipeline/head There was a failure building this commit
2024-12-29 21:12:54 -06:00
0e52cd61ae
feat: adds real spectrum correction, which might cause problems with older stuff 2024-12-29 21:08:42 -06:00
435985b4ca
test: adds init file so that test coverage doesn't fail on nixos
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-12-30 01:14:33 +00:00
b8e21da9fe
chore(release): 1.2.0
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
gitea-physics/tantri/pipeline/tag This commit looks good
2024-09-04 14:28:53 -05:00
3680f2ae4b Merge pull request 'binning' (#1) from binning into master
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
Reviewed-on: #1
2024-09-04 19:26:12 +00:00
f31b6aa6b1
feat: adds output binning and revises cli arguments
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-09-04 14:08:24 -05:00
402f6bd275
test: log tests for summary stats in binning 2024-08-05 05:38:18 -05:00
a20f0c2069
feat: adding binning feature that handles summary statistics 2024-08-05 05:24:59 -05:00
07457ba0eb
commit current snapshot version 2024-08-05 05:24:33 -05:00
35d225743a
bins require two entries at least for stdev 2024-08-05 04:02:17 -05:00
bb7f87239f
initial binning commit
Some checks failed
gitea-physics/tantri/pipeline/head There was a failure building this commit
2024-08-05 04:00:25 -05:00
f91df4227d
feat: adds apsd cli command 2024-08-05 04:00:12 -05:00
22bb3d876f
feat: adds psd calculation code and utilities to handle averaging of periodograms
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-07-27 11:07:24 -05:00
586485e24b
build: stupid build stuff for dependencies 2024-07-27 10:59:42 -05:00
2c65e2a984
Merge remote-tracking branch 'refs/remotes/origin/master'
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-07-10 10:56:47 -05:00
4623af69e7
feat: allows use of new event-based method for time series, despite it being slow
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-07-01 03:11:26 +00:00
062dbf6216
git: adds local_scripts to gitignore 2024-07-01 03:10:59 +00:00
34e31569dc
feat: adds event based time series calculation
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-07-01 02:25:27 +00:00
76cfc70328
Merge branch 'ipython' 2024-06-30 20:01:08 +00:00
bbbbac0fd5
chore: typo fix 2024-06-30 19:56:13 +00:00
37bae42a7f
chore: fix capitalisation 2024-06-30 19:53:18 +00:00
cb47feb233
nix: updates flake 2024-06-30 19:50:30 +00:00
2bae8c0ad0
chore: adds ipython 2024-06-30 19:45:27 +00:00
4a6303bf47
chore: adds zsh completions 2024-06-29 11:19:30 -05:00
de92b1631b
git: gitignore stuff 2024-06-29 11:19:08 -05:00
5cd9004103
chore: version updates, matplotlib as dev dependency
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-05-27 23:45:04 -05:00
0b39889064
chore(release): 1.1.0
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
gitea-physics/tantri/pipeline/tag This commit looks good
2024-05-02 14:25:31 -05:00
986e42df4e
feat: adds supersampling to better approximate a realistic time series for high frequency noise
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-05-02 14:23:51 -05:00
26a5def71a
chore: more type things 2024-05-01 23:46:03 -05:00
ca371762ae
ref: refactoring types and stuff 2024-05-01 23:43:54 -05:00
f55bbe66ee
feat: allows calling cli commands from libraries with wrapped commands, removes tqdm
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
2024-04-29 14:42:31 -05:00
286e49f84a
doc: adds badges 2024-04-29 14:41:53 -05:00
330e6466cd
chore(release): 1.0.1
All checks were successful
gitea-physics/tantri/pipeline/head This commit looks good
gitea-physics/tantri/pipeline/tag This commit looks good
2024-04-22 15:18:20 -05:00
c468eec343
chore: fixes pyproject version 2024-04-22 15:15:40 -05:00
34 changed files with 4679 additions and 565 deletions

4
.gitignore vendored

@@ -144,3 +144,7 @@ cython_debug/
.csv
.idea
# just a local scripts dir for us to be able to run things in
local_scripts/

CHANGELOG.md

@@ -2,6 +2,40 @@
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
## [1.3.0](https://gitea.deepak.science:2222/physics/tantri/compare/1.2.0...1.3.0) (2024-12-30)
### Features
* adds real spectrum correction, which might cause problems with older stuff ([0e52cd6](https://gitea.deepak.science:2222/physics/tantri/commit/0e52cd61aecb0ccc51d800fffe7d98551c17ee2e))
### Bug Fixes
* fixes real spectrum issue (revise these files if problems later...) ([9c3f5aa](https://gitea.deepak.science:2222/physics/tantri/commit/9c3f5aa286ac9cd812fa9e753d94d7955c89851c))
## [1.2.0](https://gitea.deepak.science:2222/physics/tantri/compare/1.1.0...1.2.0) (2024-09-04)
### Features
* adding binning feature that handles summary statistics ([a20f0c2](https://gitea.deepak.science:2222/physics/tantri/commit/a20f0c2069ffc6d930aee2fa2c93c6ade701b7f8))
* adds apsd cli command ([f91df42](https://gitea.deepak.science:2222/physics/tantri/commit/f91df4227dc86b525b9a53162884f4bfb3a3944c))
* adds event based time series calculation ([34e3156](https://gitea.deepak.science:2222/physics/tantri/commit/34e31569dceb581fd31b0b4479ffddbd6e59b724))
* adds output binning and revises cli arguments ([f31b6aa](https://gitea.deepak.science:2222/physics/tantri/commit/f31b6aa6b15441cd6c65e2809165d2e9a668883e))
* adds psd calculation code and utilities to handle averaging of periodograms ([22bb3d8](https://gitea.deepak.science:2222/physics/tantri/commit/22bb3d876f8994dc81d93b8ff2b1f68e0d91d2b0))
* allows use of new event-based method for time series, despite it being slow ([4623af6](https://gitea.deepak.science:2222/physics/tantri/commit/4623af69e7971eb72fdd6509d28ca41d8125a906))
## [1.1.0](https://gitea.deepak.science:2222/physics/tantri/compare/1.0.1...1.1.0) (2024-05-02)
### Features
* adds supersampling to better approximate a realistic time series for high frequency noise ([986e42d](https://gitea.deepak.science:2222/physics/tantri/commit/986e42df4e1d27babbb73cb064caedc5833357ba))
* allows calling cli commands from libraries with wrapped commands, removes tqdm ([f55bbe6](https://gitea.deepak.science:2222/physics/tantri/commit/f55bbe66ee846215221a3361934cf365bf390860))
### [1.0.1](https://gitea.deepak.science:2222/physics/tantri/compare/1.0.0...1.0.1) (2024-04-22)
## 1.0.0 (2024-04-22)
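The 1.2.0 entry above mentions "psd calculation code and utilities to handle averaging of periodograms". As context, averaging periodograms over segments is the standard way to trade frequency resolution for lower PSD estimator variance. The sketch below is illustrative only, assuming nothing about tantri's actual API; the function name and signature are invented here:

```python
# Illustrative sketch (not tantri's implementation) of averaging
# periodograms over non-overlapping segments to reduce PSD variance.
import numpy as np

def averaged_periodogram(x, sample_rate, segment_length):
    """Split x into non-overlapping segments, periodogram each, average."""
    n_segments = len(x) // segment_length
    segments = np.reshape(
        x[: n_segments * segment_length], (n_segments, segment_length)
    )
    # One-sided periodogram of each segment, then the mean across segments.
    spectra = np.abs(np.fft.rfft(segments, axis=1)) ** 2 / (
        segment_length * sample_rate
    )
    freqs = np.fft.rfftfreq(segment_length, d=1.0 / sample_rate)
    return freqs, spectra.mean(axis=0)
```

With a noisy sine input, the peak of the averaged spectrum lands on the sine's frequency bin, while the noise floor is much smoother than a single full-length periodogram would give.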

README.md

@@ -1,4 +1,12 @@
# tantri - generating telegraph noise
# tantri - generating telegraph noise and other Dipole TLS calculations
[![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-green.svg?style=flat-square)](https://conventionalcommits.org)
[![PyPI](https://img.shields.io/pypi/v/tantri?style=flat-square)](https://pypi.org/project/tantri/)
[![Jenkins](https://img.shields.io/jenkins/build?jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Ftantri%2Fjob%2Fmaster&style=flat-square)](https://jenkins.deepak.science/job/gitea-physics/job/tantri/job/master/)
![Jenkins tests](https://img.shields.io/jenkins/tests?compact_message&jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Ftantri%2Fjob%2Fmaster%2F&style=flat-square)
![Jenkins Coverage](https://img.shields.io/jenkins/coverage/cobertura?jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Ftantri%2Fjob%2Fmaster%2F&style=flat-square)
![Maintenance](https://img.shields.io/maintenance/yes/2024?style=flat-square)
Named as an oblique half-pun on both tantri as a form of generating, and tantri as a telegraph or wire.
@@ -8,4 +16,4 @@ Build with `just`, preferred over `do.sh` I think.
## CLI
the json files for dots and dipoles have a specific format that ignores extraneous data but is sad about missing fields.
The json files for dots and dipoles have a specific format that ignores extraneous data but is sad about missing fields.
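The README change above describes the JSON parsing behaviour: extraneous data is ignored, but missing fields fail. A minimal sketch of that pattern follows; the field names (`"p"`, `"s"`, `"w"`) are hypothetical placeholders, not tantri's actual schema:

```python
# Illustrative sketch of "ignore extra keys, fail on missing fields".
# The required field names here are hypothetical, not tantri's schema.
import json

REQUIRED_DIPOLE_FIELDS = ("p", "s", "w")  # hypothetical field names

def parse_dipole(raw: str) -> dict:
    data = json.loads(raw)
    missing = [f for f in REQUIRED_DIPOLE_FIELDS if f not in data]
    if missing:
        # "sad about missing fields": refuse rather than guess defaults.
        raise KeyError(f"missing required fields: {missing}")
    # Keep only known fields; extraneous keys are silently dropped.
    return {f: data[f] for f in REQUIRED_DIPOLE_FIELDS}
```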

25
flake.lock generated

@@ -23,11 +23,11 @@
"systems": "systems_2"
},
"locked": {
"lastModified": 1705309234,
"narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
@@ -59,15 +59,16 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1713733131,
"narHash": "sha256-wpksn+coLuupYNuvB0CVh6N2ekYWoHk5rgjE1jw5tpk=",
"lastModified": 1719468428,
"narHash": "sha256-vN5xJAZ4UGREEglh3lfbbkIj+MPEYMuqewMn4atZFaQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b4bcfd112787f0eb79bbfd8306302802ec885003",
"rev": "1e3deb3d8a86a870d925760db1a5adecc64d329d",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
@@ -83,11 +84,11 @@
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1708589824,
"narHash": "sha256-2GOiFTkvs5MtVF65sC78KNVxQSmsxtk0WmV1wJ9V2ck=",
"lastModified": 1719549552,
"narHash": "sha256-efvBV+45uQA6r7aov48H6MhvKp1QUIyIX5gh9oueUzs=",
"owner": "nix-community",
"repo": "poetry2nix",
"rev": "3c92540611f42d3fb2d0d084a6c694cd6544b609",
"rev": "4fd045cdb85f2a0173021a4717dc01d92d7ab2b2",
"type": "github"
},
"original": {
@@ -155,11 +156,11 @@
]
},
"locked": {
"lastModified": 1708335038,
"narHash": "sha256-ETLZNFBVCabo7lJrpjD6cAbnE11eDOjaQnznmg/6hAE=",
"lastModified": 1718522839,
"narHash": "sha256-ULzoKzEaBOiLRtjeY3YoGFJMwWSKRYOic6VNw2UyTls=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "e504621290a1fd896631ddbc5e9c16f4366c9f65",
"rev": "68eb1dc333ce82d0ab0c0357363ea17c31ea1f81",
"type": "github"
},
"original": {

flake.nix

@@ -2,7 +2,7 @@
description = "Application packaged using poetry2nix";
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nixpkgs.url = "github:NixOS/nixpkgs";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
inputs.poetry2nixSrc = {
url = "github:nix-community/poetry2nix";
inputs.nixpkgs.follows = "nixpkgs";

justfile

@@ -65,3 +65,12 @@ release:
htmlcov:
    poetry run pytest --cov-report=html

zsh_completions:
    #!/usr/bin/env bash
    set -euxo pipefail
    if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
        eval "$(_TANTRI_COMPLETE=zsh_source tantri)"
    else
        echo "Nope only nix."
    fi

167
poetry.lock generated

@@ -62,63 +62,63 @@ files = [
[[package]]
name = "coverage"
version = "7.4.4"
version = "7.6.0"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "coverage-7.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0be5efd5127542ef31f165de269f77560d6cdef525fffa446de6f7e9186cfb2"},
{file = "coverage-7.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ccd341521be3d1b3daeb41960ae94a5e87abe2f46f17224ba5d6f2b8398016cf"},
{file = "coverage-7.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09fa497a8ab37784fbb20ab699c246053ac294d13fc7eb40ec007a5043ec91f8"},
{file = "coverage-7.4.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1a93009cb80730c9bca5d6d4665494b725b6e8e157c1cb7f2db5b4b122ea562"},
{file = "coverage-7.4.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:690db6517f09336559dc0b5f55342df62370a48f5469fabf502db2c6d1cffcd2"},
{file = "coverage-7.4.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:09c3255458533cb76ef55da8cc49ffab9e33f083739c8bd4f58e79fecfe288f7"},
{file = "coverage-7.4.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8ce1415194b4a6bd0cdcc3a1dfbf58b63f910dcb7330fe15bdff542c56949f87"},
{file = "coverage-7.4.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b91cbc4b195444e7e258ba27ac33769c41b94967919f10037e6355e998af255c"},
{file = "coverage-7.4.4-cp310-cp310-win32.whl", hash = "sha256:598825b51b81c808cb6f078dcb972f96af96b078faa47af7dfcdf282835baa8d"},
{file = "coverage-7.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:09ef9199ed6653989ebbcaacc9b62b514bb63ea2f90256e71fea3ed74bd8ff6f"},
{file = "coverage-7.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f9f50e7ef2a71e2fae92774c99170eb8304e3fdf9c8c3c7ae9bab3e7229c5cf"},
{file = "coverage-7.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:623512f8ba53c422fcfb2ce68362c97945095b864cda94a92edbaf5994201083"},
{file = "coverage-7.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0513b9508b93da4e1716744ef6ebc507aff016ba115ffe8ecff744d1322a7b63"},
{file = "coverage-7.4.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40209e141059b9370a2657c9b15607815359ab3ef9918f0196b6fccce8d3230f"},
{file = "coverage-7.4.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a2b2b78c78293782fd3767d53e6474582f62443d0504b1554370bde86cc8227"},
{file = "coverage-7.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:73bfb9c09951125d06ee473bed216e2c3742f530fc5acc1383883125de76d9cd"},
{file = "coverage-7.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1f384c3cc76aeedce208643697fb3e8437604b512255de6d18dae3f27655a384"},
{file = "coverage-7.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:54eb8d1bf7cacfbf2a3186019bcf01d11c666bd495ed18717162f7eb1e9dd00b"},
{file = "coverage-7.4.4-cp311-cp311-win32.whl", hash = "sha256:cac99918c7bba15302a2d81f0312c08054a3359eaa1929c7e4b26ebe41e9b286"},
{file = "coverage-7.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:b14706df8b2de49869ae03a5ccbc211f4041750cd4a66f698df89d44f4bd30ec"},
{file = "coverage-7.4.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:201bef2eea65e0e9c56343115ba3814e896afe6d36ffd37bab783261db430f76"},
{file = "coverage-7.4.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:41c9c5f3de16b903b610d09650e5e27adbfa7f500302718c9ffd1c12cf9d6818"},
{file = "coverage-7.4.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d898fe162d26929b5960e4e138651f7427048e72c853607f2b200909794ed978"},
{file = "coverage-7.4.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ea79bb50e805cd6ac058dfa3b5c8f6c040cb87fe83de10845857f5535d1db70"},
{file = "coverage-7.4.4-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce4b94265ca988c3f8e479e741693d143026632672e3ff924f25fab50518dd51"},
{file = "coverage-7.4.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:00838a35b882694afda09f85e469c96367daa3f3f2b097d846a7216993d37f4c"},
{file = "coverage-7.4.4-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:fdfafb32984684eb03c2d83e1e51f64f0906b11e64482df3c5db936ce3839d48"},
{file = "coverage-7.4.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:69eb372f7e2ece89f14751fbcbe470295d73ed41ecd37ca36ed2eb47512a6ab9"},
{file = "coverage-7.4.4-cp312-cp312-win32.whl", hash = "sha256:137eb07173141545e07403cca94ab625cc1cc6bc4c1e97b6e3846270e7e1fea0"},
{file = "coverage-7.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:d71eec7d83298f1af3326ce0ff1d0ea83c7cb98f72b577097f9083b20bdaf05e"},
{file = "coverage-7.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d5ae728ff3b5401cc320d792866987e7e7e880e6ebd24433b70a33b643bb0384"},
{file = "coverage-7.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cc4f1358cb0c78edef3ed237ef2c86056206bb8d9140e73b6b89fbcfcbdd40e1"},
{file = "coverage-7.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8130a2aa2acb8788e0b56938786c33c7c98562697bf9f4c7d6e8e5e3a0501e4a"},
{file = "coverage-7.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf271892d13e43bc2b51e6908ec9a6a5094a4df1d8af0bfc360088ee6c684409"},
{file = "coverage-7.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a4cdc86d54b5da0df6d3d3a2f0b710949286094c3a6700c21e9015932b81447e"},
{file = "coverage-7.4.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:ae71e7ddb7a413dd60052e90528f2f65270aad4b509563af6d03d53e979feafd"},
{file = "coverage-7.4.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:38dd60d7bf242c4ed5b38e094baf6401faa114fc09e9e6632374388a404f98e7"},
{file = "coverage-7.4.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:aa5b1c1bfc28384f1f53b69a023d789f72b2e0ab1b3787aae16992a7ca21056c"},
{file = "coverage-7.4.4-cp38-cp38-win32.whl", hash = "sha256:dfa8fe35a0bb90382837b238fff375de15f0dcdb9ae68ff85f7a63649c98527e"},
{file = "coverage-7.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:b2991665420a803495e0b90a79233c1433d6ed77ef282e8e152a324bbbc5e0c8"},
{file = "coverage-7.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3b799445b9f7ee8bf299cfaed6f5b226c0037b74886a4e11515e569b36fe310d"},
{file = "coverage-7.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b4d33f418f46362995f1e9d4f3a35a1b6322cb959c31d88ae56b0298e1c22357"},
{file = "coverage-7.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aadacf9a2f407a4688d700e4ebab33a7e2e408f2ca04dbf4aef17585389eff3e"},
{file = "coverage-7.4.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c95949560050d04d46b919301826525597f07b33beba6187d04fa64d47ac82e"},
{file = "coverage-7.4.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff7687ca3d7028d8a5f0ebae95a6e4827c5616b31a4ee1192bdfde697db110d4"},
{file = "coverage-7.4.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5fc1de20b2d4a061b3df27ab9b7c7111e9a710f10dc2b84d33a4ab25065994ec"},
{file = "coverage-7.4.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c74880fc64d4958159fbd537a091d2a585448a8f8508bf248d72112723974cbd"},
{file = "coverage-7.4.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:742a76a12aa45b44d236815d282b03cfb1de3b4323f3e4ec933acfae08e54ade"},
{file = "coverage-7.4.4-cp39-cp39-win32.whl", hash = "sha256:d89d7b2974cae412400e88f35d86af72208e1ede1a541954af5d944a8ba46c57"},
{file = "coverage-7.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:9ca28a302acb19b6af89e90f33ee3e1906961f94b54ea37de6737b7ca9d8827c"},
{file = "coverage-7.4.4-pp38.pp39.pp310-none-any.whl", hash = "sha256:b2c5edc4ac10a7ef6605a966c58929ec6c1bd0917fb8c15cb3363f65aa40e677"},
{file = "coverage-7.4.4.tar.gz", hash = "sha256:c901df83d097649e257e803be22592aedfd5182f07b3cc87d640bbb9afd50f49"},
{file = "coverage-7.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:dff044f661f59dace805eedb4a7404c573b6ff0cdba4a524141bc63d7be5c7fd"},
{file = "coverage-7.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a8659fd33ee9e6ca03950cfdcdf271d645cf681609153f218826dd9805ab585c"},
{file = "coverage-7.6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7792f0ab20df8071d669d929c75c97fecfa6bcab82c10ee4adb91c7a54055463"},
{file = "coverage-7.6.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d4b3cd1ca7cd73d229487fa5caca9e4bc1f0bca96526b922d61053ea751fe791"},
{file = "coverage-7.6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e7e128f85c0b419907d1f38e616c4f1e9f1d1b37a7949f44df9a73d5da5cd53c"},
{file = "coverage-7.6.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a94925102c89247530ae1dab7dc02c690942566f22e189cbd53579b0693c0783"},
{file = "coverage-7.6.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:dcd070b5b585b50e6617e8972f3fbbee786afca71b1936ac06257f7e178f00f6"},
{file = "coverage-7.6.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:d50a252b23b9b4dfeefc1f663c568a221092cbaded20a05a11665d0dbec9b8fb"},
{file = "coverage-7.6.0-cp310-cp310-win32.whl", hash = "sha256:0e7b27d04131c46e6894f23a4ae186a6a2207209a05df5b6ad4caee6d54a222c"},
{file = "coverage-7.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:54dece71673b3187c86226c3ca793c5f891f9fc3d8aa183f2e3653da18566169"},
{file = "coverage-7.6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c7b525ab52ce18c57ae232ba6f7010297a87ced82a2383b1afd238849c1ff933"},
{file = "coverage-7.6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4bea27c4269234e06f621f3fac3925f56ff34bc14521484b8f66a580aacc2e7d"},
{file = "coverage-7.6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed8d1d1821ba5fc88d4a4f45387b65de52382fa3ef1f0115a4f7a20cdfab0e94"},
{file = "coverage-7.6.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:01c322ef2bbe15057bc4bf132b525b7e3f7206f071799eb8aa6ad1940bcf5fb1"},
{file = "coverage-7.6.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03cafe82c1b32b770a29fd6de923625ccac3185a54a5e66606da26d105f37dac"},
{file = "coverage-7.6.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:0d1b923fc4a40c5832be4f35a5dab0e5ff89cddf83bb4174499e02ea089daf57"},
{file = "coverage-7.6.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4b03741e70fb811d1a9a1d75355cf391f274ed85847f4b78e35459899f57af4d"},
{file = "coverage-7.6.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a73d18625f6a8a1cbb11eadc1d03929f9510f4131879288e3f7922097a429f63"},
{file = "coverage-7.6.0-cp311-cp311-win32.whl", hash = "sha256:65fa405b837060db569a61ec368b74688f429b32fa47a8929a7a2f9b47183713"},
{file = "coverage-7.6.0-cp311-cp311-win_amd64.whl", hash = "sha256:6379688fb4cfa921ae349c76eb1a9ab26b65f32b03d46bb0eed841fd4cb6afb1"},
{file = "coverage-7.6.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f7db0b6ae1f96ae41afe626095149ecd1b212b424626175a6633c2999eaad45b"},
{file = "coverage-7.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:bbdf9a72403110a3bdae77948b8011f644571311c2fb35ee15f0f10a8fc082e8"},
{file = "coverage-7.6.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cc44bf0315268e253bf563f3560e6c004efe38f76db03a1558274a6e04bf5d5"},
{file = "coverage-7.6.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da8549d17489cd52f85a9829d0e1d91059359b3c54a26f28bec2c5d369524807"},
{file = "coverage-7.6.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0086cd4fc71b7d485ac93ca4239c8f75732c2ae3ba83f6be1c9be59d9e2c6382"},
{file = "coverage-7.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1fad32ee9b27350687035cb5fdf9145bc9cf0a094a9577d43e909948ebcfa27b"},
{file = "coverage-7.6.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:044a0985a4f25b335882b0966625270a8d9db3d3409ddc49a4eb00b0ef5e8cee"},
{file = "coverage-7.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:76d5f82213aa78098b9b964ea89de4617e70e0d43e97900c2778a50856dac605"},
{file = "coverage-7.6.0-cp312-cp312-win32.whl", hash = "sha256:3c59105f8d58ce500f348c5b56163a4113a440dad6daa2294b5052a10db866da"},
{file = "coverage-7.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:ca5d79cfdae420a1d52bf177de4bc2289c321d6c961ae321503b2ca59c17ae67"},
{file = "coverage-7.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d39bd10f0ae453554798b125d2f39884290c480f56e8a02ba7a6ed552005243b"},
{file = "coverage-7.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:beb08e8508e53a568811016e59f3234d29c2583f6b6e28572f0954a6b4f7e03d"},
{file = "coverage-7.6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b2e16f4cd2bc4d88ba30ca2d3bbf2f21f00f382cf4e1ce3b1ddc96c634bc48ca"},
{file = "coverage-7.6.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6616d1c9bf1e3faea78711ee42a8b972367d82ceae233ec0ac61cc7fec09fa6b"},
{file = "coverage-7.6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ad4567d6c334c46046d1c4c20024de2a1c3abc626817ae21ae3da600f5779b44"},
{file = "coverage-7.6.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:d17c6a415d68cfe1091d3296ba5749d3d8696e42c37fca5d4860c5bf7b729f03"},
{file = "coverage-7.6.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:9146579352d7b5f6412735d0f203bbd8d00113a680b66565e205bc605ef81bc6"},
{file = "coverage-7.6.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:cdab02a0a941af190df8782aafc591ef3ad08824f97850b015c8c6a8b3877b0b"},
{file = "coverage-7.6.0-cp38-cp38-win32.whl", hash = "sha256:df423f351b162a702c053d5dddc0fc0ef9a9e27ea3f449781ace5f906b664428"},
{file = "coverage-7.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:f2501d60d7497fd55e391f423f965bbe9e650e9ffc3c627d5f0ac516026000b8"},
{file = "coverage-7.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7221f9ac9dad9492cecab6f676b3eaf9185141539d5c9689d13fd6b0d7de840c"},
{file = "coverage-7.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ddaaa91bfc4477d2871442bbf30a125e8fe6b05da8a0015507bfbf4718228ab2"},
{file = "coverage-7.6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4cbe651f3904e28f3a55d6f371203049034b4ddbce65a54527a3f189ca3b390"},
{file = "coverage-7.6.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831b476d79408ab6ccfadaaf199906c833f02fdb32c9ab907b1d4aa0713cfa3b"},
{file = "coverage-7.6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:46c3d091059ad0b9c59d1034de74a7f36dcfa7f6d3bde782c49deb42438f2450"},
{file = "coverage-7.6.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4d5fae0a22dc86259dee66f2cc6c1d3e490c4a1214d7daa2a93d07491c5c04b6"},
{file = "coverage-7.6.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:07ed352205574aad067482e53dd606926afebcb5590653121063fbf4e2175166"},
{file = "coverage-7.6.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:49c76cdfa13015c4560702574bad67f0e15ca5a2872c6a125f6327ead2b731dd"},
{file = "coverage-7.6.0-cp39-cp39-win32.whl", hash = "sha256:482855914928c8175735a2a59c8dc5806cf7d8f032e4820d52e845d1f731dca2"},
{file = "coverage-7.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:543ef9179bc55edfd895154a51792b01c017c87af0ebaae092720152e19e42ca"},
{file = "coverage-7.6.0-pp38.pp39.pp310-none-any.whl", hash = "sha256:6fe885135c8a479d3e37a7aae61cbd3a0fb2deccb4dda3c25f92a49189f766d6"},
{file = "coverage-7.6.0.tar.gz", hash = "sha256:289cc803fa1dc901f84701ac10c9ee873619320f2f9aff38794db4a4a0268d51"},
]
[package.dependencies]
@@ -129,13 +129,13 @@ toml = ["tomli"]
[[package]]
name = "exceptiongroup"
version = "1.2.1"
version = "1.2.2"
description = "Backport of PEP 654 (exception groups)"
optional = false
python-versions = ">=3.7"
files = [
{file = "exceptiongroup-1.2.1-py3-none-any.whl", hash = "sha256:5258b9ed329c5bbdd31a309f53cbfb0b155341807f6ff7606a1e801a891b29ad"},
{file = "exceptiongroup-1.2.1.tar.gz", hash = "sha256:a4785e48b045528f5bfe627b6ad554ff32def154f42372786903b7abcfe1aa16"},
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
]
[package.extras]
@@ -271,13 +271,13 @@ files = [
[[package]]
name = "packaging"
version = "24.0"
version = "24.1"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "packaging-24.0-py3-none-any.whl", hash = "sha256:2ddfb553fdf02fb784c234c7ba6ccc288296ceabec964ad2eae3777778130bc5"},
{file = "packaging-24.0.tar.gz", hash = "sha256:eb82c5e3e56209074766e6885bb04b8c38a0c015d0a30036ebe7ece34c9989e9"},
{file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
{file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
]
[[package]]
@@ -293,18 +293,19 @@ files = [
[[package]]
name = "platformdirs"
version = "4.2.0"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
version = "4.2.2"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
python-versions = ">=3.8"
files = [
{file = "platformdirs-4.2.0-py3-none-any.whl", hash = "sha256:0614df2a2f37e1a662acbd8e2b25b92ccf8632929bc6d43467e17fe89c75e068"},
{file = "platformdirs-4.2.0.tar.gz", hash = "sha256:ef0cc731df711022c174543cb70a9b5bd22e5a9337c8624ef2c2ceb8ddad8768"},
{file = "platformdirs-4.2.2-py3-none-any.whl", hash = "sha256:2d7a1657e36a80ea911db832a8a6ece5ee53d8de21edd5cc5879af6530b1bfee"},
{file = "platformdirs-4.2.2.tar.gz", hash = "sha256:38b7b51f512eed9e84a22788b4bce1de17c0adb134d6becb09836e37d8654cd3"},
]
[package.extras]
docs = ["furo (>=2023.9.10)", "proselint (>=0.13)", "sphinx (>=7.2.6)", "sphinx-autodoc-typehints (>=1.25.2)"]
test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.4.3)", "pytest-cov (>=4.1)", "pytest-mock (>=3.12)"]
type = ["mypy (>=1.8)"]
[[package]]
name = "pluggy"
@@ -345,13 +346,13 @@ files = [
[[package]]
name = "pytest"
version = "8.1.1"
version = "8.2.2"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "pytest-8.1.1-py3-none-any.whl", hash = "sha256:2a8386cfc11fa9d2c50ee7b2a57e7d898ef90470a7a34c4b949ff59662bb78b7"},
{file = "pytest-8.1.1.tar.gz", hash = "sha256:ac978141a75948948817d360297b7aae0fcb9d6ff6bc9ec6d514b85d5a65c044"},
{file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"},
{file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"},
]
[package.dependencies]
@@ -359,11 +360,11 @@ colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.4,<2.0"
pluggy = ">=1.5,<2.0"
tomli = {version = ">=1", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"]
[[package]]
name = "pytest-cov"
@@ -446,38 +447,18 @@ files = [
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
[[package]]
name = "tqdm"
version = "4.66.2"
description = "Fast, Extensible Progress Meter"
optional = false
python-versions = ">=3.7"
files = [
{file = "tqdm-4.66.2-py3-none-any.whl", hash = "sha256:1ee4f8a893eb9bef51c6e35730cebf234d5d0b6bd112b0271e10ed7c24a02bd9"},
{file = "tqdm-4.66.2.tar.gz", hash = "sha256:6cd52cdf0fef0e0f543299cfc96fec90d7b8a7e88745f411ec33eb44d5ed3531"},
]
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["pytest (>=6)", "pytest-cov", "pytest-timeout", "pytest-xdist"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "typing-extensions"
version = "4.11.0"
version = "4.12.2"
description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false
python-versions = ">=3.8"
files = [
{file = "typing_extensions-4.11.0-py3-none-any.whl", hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"},
{file = "typing_extensions-4.11.0.tar.gz", hash = "sha256:83f085bd5ca59c80295fc2a82ab5dac679cbe02b9f33f7d83af68e241bea51b0"},
{file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<3.10"
content-hash = "21aaa150d82e4d2583450110d1c16d878fb673a94e3c9eab7eb48a880618243a"
content-hash = "39b03182d5b5b2d09ef49c385ed2eb7b943dca3fdb435267143edca72fcb161e"


@@ -1,6 +1,6 @@
[tool.poetry]
name = "tantri"
version = "0.0.1"
version = "1.3.0"
description = "Python dipole model evaluator"
authors = ["Deepak <dmallubhotla+github@gmail.com>"]
license = "GPL-3.0-only"
@@ -11,7 +11,6 @@ python = ">=3.8.1,<3.10"
numpy = "^1.22.3"
scipy = "~1.10"
click = "^8.1.7"
tqdm = "^4.66.2"
[tool.poetry.dev-dependencies]
pytest = ">=6"
@@ -38,7 +37,9 @@ plugins = "numpy.typing.mypy_plugin"
[[tool.mypy.overrides]]
module = [
"scipy",
"scipy.optimize"
"scipy.optimize",
"scipy.stats",
"scipy.fft",
]
ignore_missing_imports = true


@@ -0,0 +1,7 @@
"""
Binning data.
"""
from tantri.binning.binning import bin_lists, BinConfig, BinSummary, BinSummaryValue
__all__ = ["bin_lists", "BinConfig", "BinSummary", "BinSummaryValue"]

tantri/binning/binning.py Normal file

@@ -0,0 +1,135 @@
import typing
import numpy
import logging
from dataclasses import dataclass
_logger = logging.getLogger(__name__)
@dataclass
class BinConfig:
log_scale: bool # true means that our bins of the x coordinate will be in log space
# if linear scale (not log_scale) then the semantics are
# min_x, min_x + bin_width, .... min_x + A * bin_width, max_x (and the last bin may not be evenly spaced)
# if log_scale then log(min_x), log(min_x) + bin_width, log(min_x) + 2 bin_width etc.
# (so essentially the units of bin_width depend on log_scale)
bin_width: float
# bin_min is always given in linear units; it is log-scaled internally if needed
bin_min: typing.Optional[float] = None
# note that min_points_required must be >= 2
min_points_required: int = 2
def __post_init__(self):
if self.min_points_required < 2:
raise ValueError(
f"Can't compute summary statistics with bins of size < 2, so {self.min_points_required} is invalid"
)
@dataclass
class BinSummaryValue:
mean_y: float
stdev_y: float
def _summarise_values(ys: numpy.ndarray) -> BinSummaryValue:
mean_y = ys.mean(axis=0).item()
stdev_y = ys.std(axis=0, ddof=1).item()
return BinSummaryValue(mean_y, stdev_y)
@dataclass
class BinSummary:
mean_x: float
summary_values: typing.Dict[str, BinSummaryValue]
@dataclass
class Bin:
bindex: int # this is going to be very specific to a particular binning but hey let's include it
x_min: float
# points is a tuple of (freqs, value_dicts: Dict[str, numpy.ndarray])
# this conforms well to APSD result
point_xs: numpy.ndarray
point_y_dict: typing.Dict[str, numpy.ndarray]
def mean_point(self) -> typing.Tuple[float, typing.Dict[str, float]]:
mean_x = self.point_xs.mean(axis=0).item()
mean_y_dict = {k: v.mean(axis=0).item() for k, v in self.point_y_dict.items()}
return (mean_x, mean_y_dict)
def summary_point(self) -> BinSummary:
mean_x = self.point_xs.mean(axis=0).item()
summary_dict = {k: _summarise_values(v) for k, v in self.point_y_dict.items()}
return BinSummary(mean_x, summary_dict)
def _construct_bins(xs: numpy.ndarray, bin_config: BinConfig) -> numpy.ndarray:
min_x_raw = numpy.min(xs)
# if the bin config requested bin_min is None, then we can ignore it.
if bin_config.bin_min is not None:
_logger.debug(f"Received a desired bin_min={bin_config.bin_min}")
if bin_config.bin_min > min_x_raw:
raise ValueError(
f"The lowest x value of {xs=} was {min_x_raw=}, which is lower than the requested bin_min={bin_config.bin_min}"
)
else:
_logger.debug(f"Setting minimum to {bin_config.bin_min}")
min_x_raw = bin_config.bin_min
max_x_raw = numpy.max(xs)
if bin_config.log_scale:
min_x = numpy.log10(min_x_raw)
max_x = numpy.log10(max_x_raw)
else:
min_x = min_x_raw
max_x = max_x_raw
num_points = numpy.ceil(1 + (max_x - min_x) / bin_config.bin_width)
bins = min_x + (numpy.arange(0, num_points) * bin_config.bin_width)
if bin_config.log_scale:
return 10**bins
else:
return bins
def _populate_bins(
xs: numpy.ndarray, ys: typing.Dict[str, numpy.ndarray], bins: numpy.ndarray
) -> typing.Sequence[Bin]:
indexes = numpy.digitize(xs, bins) - 1
output_bins = []
seen = set()
for bindex in indexes:
if bindex not in seen:
seen.add(bindex)
matched_x = xs[indexes == bindex]
matched_output_dict = {k: v[indexes == bindex] for k, v in ys.items()}
output_bins.append(
Bin(
bindex,
x_min=bins[bindex].item(),
point_xs=matched_x,
point_y_dict=matched_output_dict,
)
)
return output_bins
def bin_lists(
xs: numpy.ndarray, ys: typing.Dict[str, numpy.ndarray], bin_config: BinConfig
) -> typing.Sequence[Bin]:
bins = _construct_bins(xs, bin_config)
raw_bins = _populate_bins(xs, ys, bins)
return [
bin for bin in raw_bins if len(bin.point_xs) >= bin_config.min_points_required
]
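The construct/populate/filter flow above can be sketched standalone; the toy data here is hypothetical, but the edge construction, `numpy.digitize` grouping, and `ddof=1` summary mirror `_construct_bins` and `_summarise_values`:

```python
import numpy

# Log-scale bins: edges uniform in log10(x), one decade per bin.
xs = numpy.array([1.0, 2.0, 5.0, 20.0, 50.0, 300.0])
ys = numpy.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
bin_width = 1.0  # in log10 units, as BinConfig documents for log_scale

log_min = numpy.log10(xs.min())
num_edges = int(numpy.ceil(1 + (numpy.log10(xs.max()) - log_min) / bin_width))
edges = 10 ** (log_min + numpy.arange(num_edges) * bin_width)  # [1, 10, 100, 1000]

# Group points by bin index; keep bins with >= 2 points (min_points_required),
# summarising each as (mean, sample stdev).
indexes = numpy.digitize(xs, edges) - 1
summaries = {}
for bindex in numpy.unique(indexes):
    matched = ys[indexes == bindex]
    if len(matched) >= 2:
        summaries[bindex] = (matched.mean(), matched.std(ddof=1))
```

Here the decade [1, 10) holds the ys (1, 3, 5) and [10, 100) holds (7, 9); the lone point at x=300 is dropped by the minimum-points filter.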


@@ -4,11 +4,12 @@ import tantri
import numpy
import tantri.cli.input_files.write_dipoles
import tantri.cli.file_importer
import tantri.dipoles
import tantri.binning
import json
import tantri.dipoles.generation
import tantri.dipoles
import tantri.dipoles.event_time_series
import pathlib
import tqdm # type: ignore
_logger = logging.getLogger(__name__)
@@ -93,11 +94,14 @@ def cli(log, log_file):
@click.option(
"output_file",
"-o",
type=click.File("w"),
type=click.Path(path_type=pathlib.Path),
help="The output file to write, in csv format",
required=True,
)
@click.option("--header-row/--no-header-row", default=False, help="Write a header row")
@click.option(
"--event-based/--no-event-based", default=False, help="Use new event-based method"
)
def write_time_series(
dipoles_file,
dots_file,
@@ -107,10 +111,35 @@ def write_time_series(
time_series_rng_seed,
output_file,
header_row,
event_based,
):
"""
Generate a time series for the passed in parameters.
"""
_write_time_series(
dipoles_file,
dots_file,
measurement_type,
delta_t,
num_iterations,
time_series_rng_seed,
output_file,
header_row,
event_based,
)
def _write_time_series(
dipoles_file,
dots_file,
measurement_type,
delta_t,
num_iterations,
time_series_rng_seed,
output_file,
header_row,
new_method,
):
_logger.debug(
f"Received parameters [dipoles_file: {dipoles_file}] and [dots_file: {dots_file}]"
)
@@ -126,29 +155,238 @@ def write_time_series(
_logger.debug(f"Using measurement {measurement_enum.name}")
labels = [dot.label for dot in dots]
if header_row:
value_labels = ", ".join([f"{value_name}_{label}" for label in labels])
output_file.write(f"t (s), {value_labels}\n")
with output_file.open("w") as out:
if header_row:
value_labels = ", ".join([f"{value_name}_{label}" for label in labels])
out.write(f"t (s), {value_labels}\n")
_logger.debug(
f"Going to simulate {num_iterations} iterations with a delta t of {delta_t}"
_logger.debug(
f"Going to simulate {num_iterations} iterations with a delta t of {delta_t}"
)
if new_method:
_logger.info("Using new method")
_logger.debug(f"Got seed {time_series_rng_seed}")
if time_series_rng_seed is None:
time_series = tantri.dipoles.event_time_series.EventDipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t, num_iterations
)
else:
rng = numpy.random.default_rng(time_series_rng_seed)
time_series = tantri.dipoles.event_time_series.EventDipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t, num_iterations, rng
)
output_series = time_series.create_time_series()
for time, time_series_dict in output_series:
values = ", ".join(str(time_series_dict[label]) for label in labels)
out.write(f"{time}, {values}\n")
else:
# in the old method
_logger.debug(f"Got seed {time_series_rng_seed}")
if time_series_rng_seed is None:
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t
)
else:
rng = numpy.random.default_rng(time_series_rng_seed)
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t, rng
)
for i in range(num_iterations):
transition = time_series.transition()
transition_values = ", ".join(
str(transition[label]) for label in labels
)
out.write(f"{i * delta_t}, {transition_values}\n")
@cli.command()
@click.option(
"--dipoles-file",
default="dipoles.json",
show_default=True,
type=click.Path(exists=True, path_type=pathlib.Path),
help="File with json array of dipoles",
)
@click.option(
"--dots-file",
default="dots.json",
show_default=True,
type=click.Path(exists=True, path_type=pathlib.Path),
help="File with json array of dots",
)
@click.option(
"--measurement-type",
type=click.Choice([POTENTIAL, X_ELECTRIC_FIELD]),
default=POTENTIAL,
help="The type of measurement to simulate",
show_default=True,
)
@click.option(
"--delta-t",
"-t",
type=float,
default=1,
help="The delta t between time series iterations.",
show_default=True,
)
@click.option(
"--num-iterations",
# Note we're keeping this name to match write-time-series
"-n",
type=int,
default=10,
help="The number of time steps per time series, total time is num_iterations * delta_t.",
show_default=True,
)
@click.option(
"--num-time-series",
type=int,
default=20,
help="The number of simulated time series, which will be averaged over",
show_default=True,
)
@click.option(
"--time-series-rng-seed",
"-s",
type=int,
default=None,
help="A seed to use to create an override default rng. You should set this.",
)
@click.option(
"--output-file",
"-o",
type=click.Path(path_type=pathlib.Path),
help="The output file to write, in csv format",
required=True,
)
@click.option(
"--binned-output-file",
"-b",
type=click.Path(path_type=pathlib.Path),
help="Optional binned output file",
)
@click.option(
"--bin-widths",
type=float,
default=1,
show_default=True,
help="The default log(!) bin width, 1 means widths of a decade",
)
@click.option("--header-row/--no-header-row", default=False, help="Write a header row")
def write_apsd(
dipoles_file,
dots_file,
measurement_type,
delta_t,
num_iterations,
num_time_series,
time_series_rng_seed,
output_file,
binned_output_file,
bin_widths,
header_row,
):
"""
Generate an APSD for the passed in parameters, averaging over multiple (num_time_series) iterations.
"""
_write_apsd(
dipoles_file,
dots_file,
measurement_type,
delta_t,
num_iterations,
num_time_series,
time_series_rng_seed,
output_file,
binned_output_file,
bin_widths,
header_row,
)
_logger.debug(f"Got seed {time_series_rng_seed}")
if time_series_rng_seed is None:
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t
)
else:
rng = numpy.random.default_rng(time_series_rng_seed)
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t, rng
def _write_apsd(
dipoles_file,
dots_file,
measurement_type,
delta_t,
num_iterations,
num_time_series,
time_series_rng_seed,
output_file,
binned_output_file,
bin_widths,
header_row,
):
_logger.debug(
f"Received parameters [dipoles_file: {dipoles_file}] and [dots_file: {dots_file}]"
)
dipoles = tantri.cli.file_importer.read_dipoles_json_file(dipoles_file)
dots = tantri.cli.file_importer.read_dots_json_file(dots_file)
if measurement_type == POTENTIAL:
measurement_enum = tantri.dipoles.DipoleMeasurementType.ELECTRIC_POTENTIAL
value_name = "APSD_V"
elif measurement_type == X_ELECTRIC_FIELD:
measurement_enum = tantri.dipoles.DipoleMeasurementType.X_ELECTRIC_FIELD
value_name = "APSD_Ex"
_logger.debug(f"Using measurement {measurement_enum.name}")
labels = [dot.label for dot in dots]
with output_file.open("w") as out:
if header_row:
value_labels = ", ".join([f"{value_name}_{label}" for label in labels])
out.write(f"f (Hz), {value_labels}\n")
_logger.debug(
f"Going to simulate {num_iterations} iterations with a delta t of {delta_t}"
)
for i in tqdm.trange(num_iterations):
transition = time_series.transition()
transition_values = ", ".join(str(transition[label]) for label in labels)
output_file.write(f"{i * delta_t}, {transition_values}\n")
_logger.debug(f"Got seed {time_series_rng_seed}")
if time_series_rng_seed is None:
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t
)
else:
rng = numpy.random.default_rng(time_series_rng_seed)
time_series = tantri.dipoles.DipoleTimeSeries(
dipoles, dots, measurement_enum, delta_t, rng
)
apsd = time_series.generate_average_apsd(
num_series=num_time_series, num_time_series_points=num_iterations
)
values_list = zip(*[apsd.psd_dict[label] for label in labels])
for freq, values in zip(apsd.freqs, values_list):
value_string = ", ".join(str(v) for v in values)
out.write(f"{freq}, {value_string}\n")
if binned_output_file is not None:
with binned_output_file.open("w") as out:
if header_row:
value_labels = ["mean bin f (Hz)"]
for label in labels:
value_labels.append(f"{value_name}_{label}_mean")
value_labels.append(f"{value_name}_{label}_stdev")
value_labels_text = ", ".join(value_labels)
out.write(value_labels_text + "\n")
binned = tantri.binning.bin_lists(
apsd.freqs,
apsd.psd_dict,
tantri.binning.BinConfig(
True, bin_width=bin_widths, bin_min=1e-6, min_points_required=2
),
)
for bin_result in binned:
summary = bin_result.summary_point()
out_list = [str(summary.mean_x)]
for label in labels:
out_list.append(str(summary.summary_values[label].mean_y))
out_list.append(str(summary.summary_values[label].stdev_y))
out_string = ", ".join(out_list) + "\n"
out.write(out_string)
@cli.command()
@@ -158,12 +396,16 @@ def write_time_series(
)
@click.argument(
"output_file",
type=click.File("w"),
type=click.Path(path_type=pathlib.Path),
)
@click.option(
"--override-rng-seed", type=int, help="Seed to override the generation config spec."
)
def generate_dipoles(generation_config, output_file, override_rng_seed):
_generate_dipoles(generation_config, output_file, override_rng_seed)
def _generate_dipoles(generation_config, output_file, override_rng_seed):
"""Generate random dipoles as described by GENERATION_CONFIG and output to OUTPUT_FILE.
GENERATION_CONFIG should be a JSON file that matches the appropriate spec, and OUTPUT_FILE will contain JSON formatted contents.
@@ -178,18 +420,19 @@ def generate_dipoles(generation_config, output_file, override_rng_seed):
with open(generation_config, "r") as config_file:
data = json.load(config_file)
config = tantri.dipoles.generation.DipoleGenerationConfig(**data)
config = tantri.dipoles.DipoleGenerationConfig(**data)
override_rng = None
if override_rng_seed is not None:
_logger.info(f"Overriding the rng with a new one with seed {override_rng_seed}")
override_rng = numpy.random.default_rng(override_rng_seed)
_logger.debug(f"generating dipoles with config {config}...")
generated = tantri.dipoles.generation.make_dipoles(config, override_rng)
generated = tantri.dipoles.make_dipoles(config, override_rng)
output_file.write(
json.dumps(
[g.as_dict() for g in generated],
cls=tantri.cli.input_files.write_dipoles.NumpyEncoder,
with output_file.open("w") as out:
out.write(
json.dumps(
[g.as_dict() for g in generated],
cls=tantri.cli.input_files.write_dipoles.NumpyEncoder,
)
)
)


@@ -1,150 +1,18 @@
from dataclasses import dataclass
import numpy
import numpy.random
import typing
from enum import Enum
from tantri.dipoles.types import DipoleTO
import logging
_logger = logging.getLogger(__name__)
class DipoleMeasurementType(Enum):
ELECTRIC_POTENTIAL = 1
X_ELECTRIC_FIELD = 2
@dataclass(frozen=True)
class DotPosition:
# assume len 3
r: numpy.ndarray
label: str
@dataclass
class WrappedDipole:
# assumed len 3
p: numpy.ndarray
s: numpy.ndarray
# should be 1/tau up to some pis
w: float
# For caching purposes tell each dipole where the dots are
# TODO: This can be done better by only passing into the time series the non-repeated p s and w,
# TODO: and then creating a new wrapper type to include all the cached stuff.
# TODO: Realistically, the dot positions and measurement type data should live in the time series.
dot_positions: typing.Sequence[DotPosition]
measurement_type: DipoleMeasurementType
def __post_init__(self) -> None:
"""
Coerce the inputs into numpy arrays.
"""
self.p = numpy.array(self.p)
self.s = numpy.array(self.s)
self.state = 1
self.cache = {}
for pos in self.dot_positions:
if self.measurement_type is DipoleMeasurementType.ELECTRIC_POTENTIAL:
self.cache[pos.label] = self.potential(pos)
elif self.measurement_type is DipoleMeasurementType.X_ELECTRIC_FIELD:
self.cache[pos.label] = self.e_field_x(pos)
def potential(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
return self.p.dot(r_diff) / (numpy.linalg.norm(r_diff) ** 3)
def e_field_x(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
norm = numpy.linalg.norm(r_diff)
return (
((3 * self.p.dot(r_diff) * r_diff / (norm**2)) - self.p) / (norm**3)
)[0]
def transition(
self, dt: float, rng_to_use: typing.Optional[numpy.random.Generator] = None
) -> typing.Dict[str, float]:
rng: numpy.random.Generator
if rng_to_use is None:
rng = numpy.random.default_rng()
else:
rng = rng_to_use
# if on average flipping often, then just return 0, basically this dipole has been all used up.
# Facilitates going for different types of noise at very low freq?
if dt * 10 >= 1 / self.w:
# _logger.warning(
# f"delta t {dt} is too long compared to dipole frequency {self.w}"
# )
self.state = rng.integers(0, 1, endpoint=True)
else:
prob = dt * self.w
if rng.random() < prob:
# _logger.debug("flip!")
self.flip_state()
return {k: self.state * v for k, v in self.cache.items()}
def flip_state(self):
self.state *= -1
def get_wrapped_dipoles(
dipole_tos: typing.Sequence[DipoleTO],
dots: typing.Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
) -> typing.Sequence[WrappedDipole]:
return [
WrappedDipole(
p=dipole_to.p,
s=dipole_to.s,
w=dipole_to.w,
dot_positions=dots,
measurement_type=measurement_type,
)
for dipole_to in dipole_tos
]
class DipoleTimeSeries:
def __init__(
self,
dipoles: typing.Sequence[DipoleTO],
dots: typing.Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
dt: float,
rng_to_use: typing.Optional[numpy.random.Generator] = None,
):
self.rng: numpy.random.Generator
if rng_to_use is None:
self.rng = numpy.random.default_rng()
else:
self.rng = rng_to_use
self.dipoles = get_wrapped_dipoles(dipoles, dots, measurement_type)
self.state = 0
self.dt = dt
def transition(self) -> typing.Dict[str, float]:
new_vals = [dipole.transition(self.dt, self.rng) for dipole in self.dipoles]
ret = {}
for transition in new_vals:
for k, v in transition.items():
if k not in ret:
ret[k] = v
else:
ret[k] += v
return ret
from tantri.dipoles.types import (
DipoleTO,
DotPosition,
DipoleMeasurementType,
DipoleGenerationConfig,
)
from tantri.dipoles.time_series import DipoleTimeSeries, WrappedDipole
from tantri.dipoles.generation import make_dipoles
__all__ = [
"WrappedDipole",
"DipoleTimeSeries",
"DipoleTO",
"DotPosition",
"DipoleMeasurementType",
"make_dipoles",
"DipoleGenerationConfig",
]


@@ -0,0 +1,154 @@
import numpy.random
from typing import Callable, Sequence, Tuple, Optional, Dict, List
from dataclasses import dataclass
from tantri.dipoles.types import DipoleTO, DotPosition, DipoleMeasurementType
import logging
_logger = logging.getLogger(__name__)
@dataclass
class EventWrappedDipole:
# assumed len 3
p: numpy.ndarray
s: numpy.ndarray
# should be 1/tau up to some pis
w: float
# For caching purposes tell each dipole where the dots are
# TODO: This can be done better by only passing into the time series the non-repeated p s and w,
# TODO: and then creating a new wrapper type to include all the cached stuff.
# TODO: Realistically, the dot positions and measurement type data should live in the time series.
dot_positions: Sequence[DotPosition]
measurement_type: DipoleMeasurementType
def __post_init__(self) -> None:
"""
Coerce the inputs into numpy arrays.
"""
self.p = numpy.array(self.p)
self.s = numpy.array(self.s)
self.state = 1
self.cache = {}
for pos in self.dot_positions:
if self.measurement_type is DipoleMeasurementType.ELECTRIC_POTENTIAL:
self.cache[pos.label] = self.potential(pos)
elif self.measurement_type is DipoleMeasurementType.X_ELECTRIC_FIELD:
self.cache[pos.label] = self.e_field_x(pos)
def potential(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
return self.p.dot(r_diff) / (numpy.linalg.norm(r_diff) ** 3)
def e_field_x(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
norm = numpy.linalg.norm(r_diff)
return (
((3 * self.p.dot(r_diff) * r_diff / (norm**2)) - self.p) / (norm**3)
)[0]
def get_time_series(
self, dt: float, num_samples: int, rng: numpy.random.Generator
) -> Sequence[Tuple[float, Dict[str, float]]]:
_logger.debug(
f"Creating time series with params {dt=}, {num_samples=}, scale={self.w}"
)
raw_time_series = create_exponential_time_series(rng, dt, num_samples, self.w)
output = []
for time, state in raw_time_series:
output.append((time, {k: state * v for k, v in self.cache.items()}))
return output
def get_event_wrapped_dipoles(
dipole_tos: Sequence[DipoleTO],
dots: Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
) -> Sequence[EventWrappedDipole]:
return [
EventWrappedDipole(
p=dipole_to.p,
s=dipole_to.s,
w=dipole_to.w,
dot_positions=dots,
measurement_type=measurement_type,
)
for dipole_to in dipole_tos
]
class EventDipoleTimeSeries:
def __init__(
self,
dipoles: Sequence[DipoleTO],
dots: Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
dt: float,
num_samples: int,
rng_to_use: Optional[numpy.random.Generator] = None,
):
self.rng: numpy.random.Generator
if rng_to_use is None:
self.rng = numpy.random.default_rng()
else:
self.rng = rng_to_use
self.dt = dt
self.num_samples = num_samples
self.dipoles = get_event_wrapped_dipoles(dipoles, dots, measurement_type)
def create_time_series(self) -> Sequence[Tuple[float, Dict[str, float]]]:
collected_dictionary: Dict[float, Dict[str, float]] = {}
_logger.debug("Creating time series")
for dipole in self.dipoles:
_logger.debug(f"Doing dipole {dipole}")
series = dipole.get_time_series(self.dt, self.num_samples, self.rng)
for time, meases in series:
if time in collected_dictionary:
for k, v in meases.items():
collected_dictionary[time][k] += v
else:
collected_dictionary[time] = meases
return [(k, v) for k, v in collected_dictionary.items()]
def get_num_events_before(
rng: numpy.random.Generator, scale: float, total_time: float
) -> Callable[[float], int]:
_logger.debug(
f"Creating the events before function for params {scale=} {total_time=}"
)
event_times: List = []
random_size = max(1, int(total_time // scale))
while sum(event_times) < total_time:
event_times.extend(rng.exponential(scale=scale, size=random_size))
accumulator = 0
scanned_times = [accumulator := accumulator + t for t in event_times]
def num_events_before(time: float) -> int:
return len([t for t in scanned_times if t < time])
return num_events_before
def create_exponential_time_series(
rng: numpy.random.Generator, dt: float, num_samples: int, scale: float
) -> Sequence[Tuple[float, int]]:
_logger.debug("Creating an exponential time series")
total_time = dt * num_samples
_logger.debug(f"Have a total time {total_time}")
events_before = get_num_events_before(rng, scale, total_time)
_logger.debug("Finished getting the events before function")
return [(dt * i, (events_before(dt * i) % 2) * 2 - 1) for i in range(num_samples)]
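The event-based construction above (exponential waiting times, then parity of the event count at each sample time) can be sketched in a few lines; the parameters here are hypothetical:

```python
import numpy

# Telegraph signal from exponential waiting times: the state at time t is
# +1 or -1 according to the parity of the number of events before t.
rng = numpy.random.default_rng(1234)
dt, num_samples, scale = 0.1, 50, 0.5

total_time = dt * num_samples
# Draw generously many waiting times (mean = scale), accumulate to timestamps.
waits = rng.exponential(scale=scale, size=4 * int(total_time / scale))
event_times = numpy.cumsum(waits)

series = []
for i in range(num_samples):
    t = dt * i
    flips = int(numpy.count_nonzero(event_times < t))
    series.append((t, (flips % 2) * 2 - 1))
```

As in `create_exponential_time_series`, zero events before t=0 gives an initial state of -1.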


@@ -1,107 +1,5 @@
import numpy
from typing import Sequence, Optional
from dataclasses import dataclass, asdict
from tantri.dipoles.types import DipoleTO
from enum import Enum
import logging
from tantri.dipoles.generation.generate_dipole_config import make_dipoles
# stuff for generating random dipoles from parameters
_logger = logging.getLogger(__name__)
class Orientation(str, Enum):
# Enum for orientation, making string for json serialisation purposes
#
# Note that this might not be infinitely extensible?
# https://stackoverflow.com/questions/75040733/is-there-a-way-to-use-strenum-in-earlier-python-versions
XY = "XY"
Z = "Z"
RANDOM = "RANDOM"
# A description of the parameters needed to generate random dipoles
@dataclass
class DipoleGenerationConfig:
# note no actual checks anywhere that these are sensibly defined with min less than max etc.
x_min: float
x_max: float
y_min: float
y_max: float
z_min: float
z_max: float
mag: float
# these are log_10 of actual value
w_log_min: float
w_log_max: float
orientation: Orientation
dipole_count: int
generation_seed: int
def __post_init__(self):
# This allows us to transparently set this with a string, while providing early warning of a type error
self.orientation = Orientation(self.orientation)
def as_dict(self) -> dict:
return_dict = asdict(self)
return_dict["orientation"] = return_dict["orientation"].value
return return_dict
def make_dipoles(
config: DipoleGenerationConfig,
rng_override: Optional[numpy.random.Generator] = None,
) -> Sequence[DipoleTO]:
if rng_override is None:
_logger.info(
f"Using the seed [{config.generation_seed}] provided by configuration for dipole generation"
)
rng = numpy.random.default_rng(config.generation_seed)
else:
_logger.info("Using overridden rng, of unknown seed")
rng = rng_override
dipoles = []
for i in range(config.dipole_count):
sx = rng.uniform(config.x_min, config.x_max)
sy = rng.uniform(config.y_min, config.y_max)
sz = rng.uniform(config.z_min, config.z_max)
# orientation
# 0, 1, 2
# xy, z, random
if config.orientation is Orientation.RANDOM:
theta = numpy.arccos(2 * rng.random() - 1)
phi = 2 * numpy.pi * rng.random()
elif config.orientation is Orientation.Z:
theta = 0
phi = 0
elif config.orientation is Orientation.XY:
theta = numpy.pi / 2
phi = 2 * numpy.pi * rng.random()
else:
raise ValueError(
f"this shouldn't have happened, orientation index: {config}"
)
px = config.mag * numpy.cos(phi) * numpy.sin(theta)
py = config.mag * numpy.sin(phi) * numpy.sin(theta)
pz = config.mag * numpy.cos(theta)
w = 10 ** rng.uniform(config.w_log_min, config.w_log_max)
dipoles.append(
DipoleTO(numpy.array([px, py, pz]), numpy.array([sx, sy, sz]), w)
)
return dipoles
__all__ = [
"make_dipoles",
]


@@ -0,0 +1,61 @@
import numpy
from typing import Sequence, Optional
from tantri.dipoles.types import DipoleTO, DipoleGenerationConfig, Orientation
import logging
# stuff for generating random dipoles from parameters
_logger = logging.getLogger(__name__)
def make_dipoles(
config: DipoleGenerationConfig,
rng_override: Optional[numpy.random.Generator] = None,
) -> Sequence[DipoleTO]:
if rng_override is None:
_logger.info(
f"Using the seed [{config.generation_seed}] provided by configuration for dipole generation"
)
rng = numpy.random.default_rng(config.generation_seed)
else:
_logger.info("Using overridden rng, of unknown seed")
rng = rng_override
dipoles = []
for i in range(config.dipole_count):
sx = rng.uniform(config.x_min, config.x_max)
sy = rng.uniform(config.y_min, config.y_max)
sz = rng.uniform(config.z_min, config.z_max)
# orientation
# 0, 1, 2
# xy, z, random
if config.orientation is Orientation.RANDOM:
theta = numpy.arccos(2 * rng.random() - 1)
phi = 2 * numpy.pi * rng.random()
elif config.orientation is Orientation.Z:
theta = 0
phi = 0
elif config.orientation is Orientation.XY:
theta = numpy.pi / 2
phi = 2 * numpy.pi * rng.random()
else:
raise ValueError(
f"this shouldn't have happened, orientation index: {config}"
)
px = config.mag * numpy.cos(phi) * numpy.sin(theta)
py = config.mag * numpy.sin(phi) * numpy.sin(theta)
pz = config.mag * numpy.cos(theta)
w = 10 ** rng.uniform(config.w_log_min, config.w_log_max)
dipoles.append(
DipoleTO(numpy.array([px, py, pz]), numpy.array([sx, sy, sz]), w)
)
return dipoles
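The `RANDOM` branch above relies on a standard fact: drawing `theta = arccos(2u - 1)` makes `cos(theta)` uniform on [-1, 1], which distributes dipole directions uniformly on the unit sphere. A quick numerical check (sample size is hypothetical):

```python
import numpy

rng = numpy.random.default_rng(0)
n, mag = 10_000, 1.0

theta = numpy.arccos(2 * rng.random(n) - 1)
phi = 2 * numpy.pi * rng.random(n)
# Same spherical-to-Cartesian mapping as make_dipoles uses for p.
p = mag * numpy.stack(
    [
        numpy.cos(phi) * numpy.sin(theta),
        numpy.sin(phi) * numpy.sin(theta),
        numpy.cos(theta),
    ],
    axis=1,
)
norms = numpy.linalg.norm(p, axis=1)
# Every moment has magnitude `mag`, and the z components average to ~0.
```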


@@ -0,0 +1,33 @@
import dataclasses
import logging
import math
_logger = logging.getLogger(__name__)
# how many times faster than the max frequency we want to be, bigger is more accurate but 10 is probably fine
DESIRED_THRESHOLD = 10
@dataclasses.dataclass
class SuperSample:
super_dt: float
super_sample_ratio: int
def get_supersample(max_frequency: float, dt: float) -> SuperSample:
# now we want to sample at least 10x faster than max_frequency, otherwise we're going to skew our statistics
# note that this is why if performance mattered we'd be optimising this to pre-gen our flip times with poisson statistics.
# so we want (1/dt) > 10 * max_freq
if DESIRED_THRESHOLD * dt * max_frequency < 1:
# can return unchanged
_logger.debug("no supersampling needed")
return SuperSample(super_dt=dt, super_sample_ratio=1)
else:
# else we want a such that a / dt > 10 * max_freq, or a > 10 * dt * max_freq, a = math.ceil(10 * dt * max_freq)
a = math.ceil(DESIRED_THRESHOLD * dt * max_frequency)
_logger.debug(
f"max_frequency {max_frequency} and delta_t {dt} needs a ratio of {a}"
)
ret_val = SuperSample(super_dt=dt / a, super_sample_ratio=a)
_logger.debug(ret_val)
return ret_val
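The rule above reduces to `a = ceil(DESIRED_THRESHOLD * dt * max_frequency)` whenever `dt` is too coarse; a usage sketch (the helper below is a hypothetical restatement of that arithmetic, not the module's API):

```python
import math

DESIRED_THRESHOLD = 10  # sample 10x faster than the fastest dipole

def supersample_ratio(max_frequency: float, dt: float) -> int:
    """Ratio a such that (a / dt) > DESIRED_THRESHOLD * max_frequency."""
    if DESIRED_THRESHOLD * dt * max_frequency < 1:
        return 1  # dt is already fast enough; no supersampling needed
    return math.ceil(DESIRED_THRESHOLD * dt * max_frequency)

# e.g. dt = 0.0012 s against a 100 Hz dipole gives 10 * 0.0012 * 100 ~ 1.2,
# so each step must be subdivided by a ratio of 2.
```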


@@ -0,0 +1,261 @@
from dataclasses import dataclass
import numpy
import numpy.random
import typing
from tantri.dipoles.types import DipoleTO, DotPosition, DipoleMeasurementType
import tantri.dipoles.supersample
import tantri.util
import scipy.stats
import scipy.fft
import logging
_logger = logging.getLogger(__name__)
@dataclass
class APSDResult:
psd_dict: typing.Dict[str, numpy.ndarray]
freqs: numpy.ndarray
@dataclass
class TimeSeriesResult:
series_dict: typing.Dict[str, numpy.ndarray]
num_points: int
delta_t: float
def get_time_points(self):
return [t * self.delta_t for t in range(self.num_points)]
def get_apsds(self) -> APSDResult:
def sq(a):
return numpy.real(a * numpy.conjugate(a))
def psd(v):
_logger.debug("Using real part correction and multiplying PSD by 2")
return 2 * sq(scipy.fft.rfft(v)[1:]) * self.delta_t / self.num_points
fft_dict = {k: psd(v) for k, v in self.series_dict.items()}
freqs = scipy.fft.rfftfreq(self.num_points, self.delta_t)[1:]
return APSDResult(fft_dict, freqs)
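The `psd` helper above computes a one-sided PSD: drop the DC bin, square the rFFT, and scale by `2 * delta_t / num_points` (the factor 2 is the real-signal correction noted in the log message) so that summing the PSD times the frequency spacing approximately recovers the signal's variance. A standalone sanity check with hypothetical white noise:

```python
import numpy
import scipy.fft

rng = numpy.random.default_rng(7)
num_points, delta_t = 4096, 0.01
v = rng.normal(size=num_points)

fft = scipy.fft.rfft(v)[1:]  # drop the DC bin, as get_apsds does
psd = 2 * numpy.real(fft * numpy.conjugate(fft)) * delta_t / num_points
freqs = scipy.fft.rfftfreq(num_points, delta_t)[1:]

df = freqs[1] - freqs[0]  # = 1 / (num_points * delta_t)
recovered_variance = psd.sum() * df  # ~ v.var() by Parseval's theorem
```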
def average_apsds(apsds: typing.Sequence[APSDResult]) -> APSDResult:
def mean(list_of_arrays: typing.Sequence[numpy.ndarray]) -> numpy.ndarray:
return numpy.mean(numpy.array(list_of_arrays), axis=0)
if len(apsds) >= 1:
for subsequent in apsds[1:]:
if not numpy.array_equal(subsequent.freqs, apsds[0].freqs):
raise ValueError(
f"Could not average apsds, as {subsequent} does not match the frequencies in {apsds[0]}"
)
freqs = apsds[0].freqs
average_dict = tantri.util.dict_reduce([apsd.psd_dict for apsd in apsds], mean)
return APSDResult(average_dict, freqs)
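`average_apsds` reduces a list of per-label PSD dictionaries with an element-wise mean (via `tantri.util.dict_reduce`, whose internals are not shown in this diff); the operation amounts to this hypothetical two-series sketch:

```python
import numpy

psd_dicts = [
    {"dot1": numpy.array([1.0, 2.0]), "dot2": numpy.array([3.0, 4.0])},
    {"dot1": numpy.array([3.0, 4.0]), "dot2": numpy.array([5.0, 6.0])},
]

# Stack each label's arrays and average over the series axis.
averaged = {
    key: numpy.mean(numpy.array([d[key] for d in psd_dicts]), axis=0)
    for key in psd_dicts[0]
}
```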
@dataclass
class WrappedDipole:
# assumed len 3
p: numpy.ndarray
s: numpy.ndarray
# should be 1/tau up to some pis
w: float
# For caching purposes tell each dipole where the dots are
# TODO: This can be done better by only passing into the time series the non-repeated p s and w,
# TODO: and then creating a new wrapper type to include all the cached stuff.
# TODO: Realistically, the dot positions and measurement type data should live in the time series.
dot_positions: typing.Sequence[DotPosition]
measurement_type: DipoleMeasurementType
def __post_init__(self) -> None:
"""
Coerce the inputs into numpy arrays.
"""
self.p = numpy.array(self.p)
self.s = numpy.array(self.s)
self.state = 1
self.cache = {}
for pos in self.dot_positions:
if self.measurement_type is DipoleMeasurementType.ELECTRIC_POTENTIAL:
self.cache[pos.label] = self.potential(pos)
elif self.measurement_type is DipoleMeasurementType.X_ELECTRIC_FIELD:
self.cache[pos.label] = self.e_field_x(pos)
def potential(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
return self.p.dot(r_diff) / (numpy.linalg.norm(r_diff) ** 3)
def e_field_x(self, dot: DotPosition) -> float:
# let's assume single dot at origin for now
r_diff = self.s - dot.r
norm = numpy.linalg.norm(r_diff)
return (
((3 * self.p.dot(r_diff) * r_diff / (norm**2)) - self.p) / (norm**3)
)[0]
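The x-component extraction at the end is easy to sanity-check against symmetry: perpendicular to `p` the x-field vanishes, and on-axis it is `2p/r^3`. A dependency-free restatement of the same formula (illustrative helper, not the package API):

```python
def e_field_x(p, r_diff):
    # E = (3 (p.r) r / |r|^2 - p) / |r|^3, x-component only
    dot = sum(a * b for a, b in zip(p, r_diff))
    norm2 = sum(c * c for c in r_diff)
    norm = norm2 ** 0.5
    return (3 * dot * r_diff[0] / norm2 - p[0]) / norm ** 3

# dipole pointing along z, displacement along z: x-component is zero by symmetry
print(e_field_x([0, 0, 10], [0, 0, 1]))  # 0.0
# dipole along x, displacement along x: on-axis field 2p/r^3
print(e_field_x([1, 0, 0], [1, 0, 0]))  # 2.0
```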
def transition(
self, dt: float, rng_to_use: typing.Optional[numpy.random.Generator] = None
) -> typing.Dict[str, float]:
rng: numpy.random.Generator
if rng_to_use is None:
rng = numpy.random.default_rng()
else:
rng = rng_to_use
# if flipping much faster than we sample, the state decorrelates between samples,
# so just draw a fresh random +-1 state each step (mean zero); basically this dipole is all used up.
# Facilitates going for different types of noise at very low freq?
if dt * 10 >= 1 / self.w:
# _logger.warning(
# f"delta t {dt} is too long compared to dipole frequency {self.w}"
# )
self.state = 2 * rng.integers(0, 1, endpoint=True) - 1
else:
prob = dt * self.w
if rng.random() < prob:
# _logger.debug("flip!")
self.flip_state()
return {k: self.state * v for k, v in self.cache.items()}
def time_series(
self,
dt: float,
num_points: int,
rng_to_use: typing.Optional[numpy.random.Generator] = None,
) -> typing.Dict[str, numpy.ndarray]:
# don't forget to set rng
if rng_to_use is None:
rng = numpy.random.default_rng()
else:
rng = rng_to_use
# scale effective mu by the sample rate.
# mu (or w) has units of events/time, so effective rate is events/time * dt, giving mu per dt
eff_mu = dt * self.w
events = scipy.stats.poisson.rvs(eff_mu, size=num_points, random_state=rng)
telegraph_sequence = numpy.cumprod((-1) ** events)
return {k: telegraph_sequence * v for k, v in self.cache.items()}
def flip_state(self):
self.state *= -1
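The `time_series` method above generates the whole telegraph signal in one shot: Poisson-distributed event counts per step, where an odd count flips the sign. A pure-Python sketch of the same idea (names are illustrative, not the package's API):

```python
import math
import random

def poisson(rng, mu):
    # Knuth's multiplication method; adequate for the small means used here
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def telegraph_series(dt, w, num_points, rng):
    # an odd number of flip events within a step negates the state,
    # which is exactly what cumprod((-1) ** events) computes vectorised
    state, out = 1, []
    for _ in range(num_points):
        if poisson(rng, dt * w) % 2 == 1:
            state = -state
        out.append(state)
    return out

series = telegraph_series(0.1, 1.0, 1000, random.Random(0))
```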
def get_wrapped_dipoles(
dipole_tos: typing.Sequence[DipoleTO],
dots: typing.Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
) -> typing.Sequence[WrappedDipole]:
return [
WrappedDipole(
p=dipole_to.p,
s=dipole_to.s,
w=dipole_to.w,
dot_positions=dots,
measurement_type=measurement_type,
)
for dipole_to in dipole_tos
]
class DipoleTimeSeries:
def __init__(
self,
dipoles: typing.Sequence[DipoleTO],
dots: typing.Sequence[DotPosition],
measurement_type: DipoleMeasurementType,
dt: float,
rng_to_use: typing.Optional[numpy.random.Generator] = None,
):
self.rng: numpy.random.Generator
if rng_to_use is None:
self.rng = numpy.random.default_rng()
else:
self.rng = rng_to_use
self.dipoles = get_wrapped_dipoles(dipoles, dots, measurement_type)
self.state = 0
self.real_delta_t = dt
# we may need to supersample, because of how dumb this process is.
# let's find our highest frequency
max_frequency = max(d.w for d in self.dipoles)
super_sample = tantri.dipoles.supersample.get_supersample(max_frequency, dt)
self.dt = super_sample.super_dt
self.super_sample_ratio = super_sample.super_sample_ratio
def _sub_transition(self) -> typing.Dict[str, float]:
new_vals = [dipole.transition(self.dt, self.rng) for dipole in self.dipoles]
ret = {}
for transition in new_vals:
for k, v in transition.items():
if k not in ret:
ret[k] = v
else:
ret[k] += v
return ret
def transition(self) -> typing.Dict[str, float]:
# advance through every supersampled sub-step, returning only the final one
ret: typing.Dict[str, float] = {}
for _ in range(self.super_sample_ratio):
ret = self._sub_transition()
return ret
def _generate_series(
self, num_points: int, delta_t: float
) -> typing.Dict[str, numpy.ndarray]:
serieses = [
dipole.time_series(delta_t, num_points, self.rng) for dipole in self.dipoles
]
result = {}
for series in serieses:
for k, v in series.items():
if k not in result:
result[k] = v
else:
result[k] += v
return result
def generate_series(
self, num_points: int, override_delta_t: typing.Optional[float] = None
) -> TimeSeriesResult:
delta_t_to_use: float
if override_delta_t is not None:
delta_t_to_use = override_delta_t
else:
delta_t_to_use = self.real_delta_t
series = self._generate_series(num_points, delta_t_to_use)
return TimeSeriesResult(
series_dict=series, num_points=num_points, delta_t=delta_t_to_use
)
def generate_average_apsd(
self,
num_series: int,
num_time_series_points: int,
override_delta_t: typing.Optional[float] = None,
) -> APSDResult:
apsds = [
self.generate_series(num_time_series_points, override_delta_t).get_apsds()
for _ in range(num_series)
]
_logger.debug(f"Averaging {num_series} series")
return average_apsds(apsds)


@@ -1,5 +1,6 @@
import numpy
from dataclasses import dataclass, asdict
from enum import Enum
# Lazily just separating this from Dipole where there's additional cached stuff, this is just a thing
@@ -15,3 +16,59 @@ class DipoleTO:
def as_dict(self) -> dict:
return asdict(self)
class Orientation(str, Enum):
# Enum for orientation, making string for json serialisation purposes
#
# Note that this might not be infinitely extensible?
# https://stackoverflow.com/questions/75040733/is-there-a-way-to-use-strenum-in-earlier-python-versions
XY = "XY"
Z = "Z"
RANDOM = "RANDOM"
# A description of the parameters needed to generate random dipoles
@dataclass
class DipoleGenerationConfig:
# note no actual checks anywhere that these are sensibly defined with min less than max etc.
x_min: float
x_max: float
y_min: float
y_max: float
z_min: float
z_max: float
mag: float
# these are log_10 of actual value
w_log_min: float
w_log_max: float
orientation: Orientation
dipole_count: int
generation_seed: int
def __post_init__(self):
# This allows us to transparently set this with a string, while providing early warning of a type error
self.orientation = Orientation(self.orientation)
def as_dict(self) -> dict:
return_dict = asdict(self)
return_dict["orientation"] = return_dict["orientation"].value
return return_dict
class DipoleMeasurementType(Enum):
ELECTRIC_POTENTIAL = 1
X_ELECTRIC_FIELD = 2
@dataclass(frozen=True)
class DotPosition:
# assume len 3
r: numpy.ndarray
label: str
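Subclassing `str` is what lets the enum drop straight into `json.dumps`, and the `Orientation(value)` coercion in `__post_init__` means a plain string read back from JSON becomes a proper member. A minimal standalone demonstration (re-declaring the enum locally rather than importing the package):

```python
import json
from enum import Enum

class Orientation(str, Enum):
    # mirrors tantri.dipoles.types.Orientation
    XY = "XY"
    Z = "Z"
    RANDOM = "RANDOM"

# the str mixin makes members serialise as their plain values
assert json.dumps(Orientation.RANDOM) == '"RANDOM"'
# the __post_init__ pattern coerces strings and fails fast on anything unrecognised
assert Orientation("XY") is Orientation.XY
```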

tantri/util.py Normal file

@@ -0,0 +1,21 @@
import typing
A = typing.TypeVar("A")
B = typing.TypeVar("B")
def dict_reduce(
list_of_dicts: typing.Sequence[typing.Dict[str, A]],
func: typing.Callable[[typing.Sequence[A]], B],
) -> typing.Dict[str, B]:
"""
Reduce over list of dicts with function that can coalesce list of dict values.
Assumes the keys in the first dictionary are the same as the keys for every other passed in dictionary.
"""
keys = list_of_dicts[0].keys()
collated = {}
for key in keys:
collated[key] = [dct[key] for dct in list_of_dicts]
return {k: func(v) for k, v in collated.items()}
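`average_apsds` uses this with `numpy.mean` over arrays; the behaviour is easy to see with plain numbers (re-declaring the helper locally so the snippet runs on its own):

```python
def dict_reduce(list_of_dicts, func):
    # same logic as tantri.util.dict_reduce above: collate values
    # key-by-key across the dicts, then apply func to each list
    keys = list_of_dicts[0].keys()
    return {k: func([d[k] for d in list_of_dicts]) for k in keys}

def mean(values):
    return sum(values) / len(values)

averaged = dict_reduce(
    [{"dot1": 1.0, "dot2": 4.0}, {"dot1": 3.0, "dot2": 6.0}], mean
)
print(averaged)  # {'dot1': 2.0, 'dot2': 5.0}
```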


@@ -0,0 +1,86 @@
# serializer version: 1
# name: test_group_x_bins
list([
Bin(bindex=0, x_min=1.0, point_xs=array([1. , 2.8, 8. ]), point_y_dict={'identity_plus_one': array([ 3. , 4.8, 10. ])}),
Bin(bindex=1, x_min=9.0, point_xs=array([12.2, 13.6]), point_y_dict={'identity_plus_one': array([14.2, 15.6])}),
Bin(bindex=2, x_min=17.0, point_xs=array([17. , 19.71, 20. , 24. ]), point_y_dict={'identity_plus_one': array([19. , 21.71, 22. , 26. ])}),
])
# ---
# name: test_group_x_bins_log
list([
Bin(bindex=0, x_min=0.0015848899999999994, point_xs=array([0.00158489, 0.00363078, 0.0398107 ]), point_y_dict={'basic_lorentzian': array([0.159154, 0.15915 , 0.158535])}),
Bin(bindex=1, x_min=0.15848899999999994, point_xs=array([ 0.275423, 0.524807, 2.51189 , 8.74984 , 10. ]), point_y_dict={'basic_lorentzian': array([0.134062 , 0.0947588 , 0.00960602, 0.00083808, 0.00064243])}),
])
# ---
# name: test_group_x_bins_mean
list([
tuple(
3.9333333333333336,
dict({
'identity_plus_one': 5.933333333333334,
}),
),
tuple(
12.899999999999999,
dict({
'identity_plus_one': 14.899999999999999,
}),
),
tuple(
20.177500000000002,
dict({
'identity_plus_one': 22.177500000000002,
}),
),
])
# ---
# name: test_group_x_bins_mean_log
list([
tuple(
0.0423015,
dict({
'basic_lorentzian': 0.15817799999999999,
}),
),
tuple(
0.593058,
dict({
'basic_lorentzian': 0.09491108333333335,
}),
),
tuple(
4.0870750000000005,
dict({
'basic_lorentzian': 0.004363105,
}),
),
tuple(
24.196866666666665,
dict({
'basic_lorentzian': 0.0001410066333333333,
}),
),
tuple(
394.723,
dict({
'basic_lorentzian': 1.364947e-06,
}),
),
])
# ---
# name: test_group_x_bins_summary
list([
BinSummary(mean_x=3.9333333333333336, summary_values={'identity_plus_one': BinSummaryValue(mean_y=5.933333333333334, stdev_y=3.635014901390823)}),
BinSummary(mean_x=12.899999999999999, summary_values={'identity_plus_one': BinSummaryValue(mean_y=14.899999999999999, stdev_y=0.9899494936611668)}),
BinSummary(mean_x=20.177500000000002, summary_values={'identity_plus_one': BinSummaryValue(mean_y=22.177500000000002, stdev_y=2.884329789280923)}),
])
# ---
# name: test_group_x_bins_summary_log
list([
BinSummary(mean_x=0.0423015, summary_values={'basic_lorentzian': BinSummaryValue(mean_y=0.15817799999999999, stdev_y=0.001275436787927965)}),
BinSummary(mean_x=0.593058, summary_values={'basic_lorentzian': BinSummaryValue(mean_y=0.09491108333333335, stdev_y=0.05205159393153745)}),
BinSummary(mean_x=4.0870750000000005, summary_values={'basic_lorentzian': BinSummaryValue(mean_y=0.004363105, stdev_y=0.0025964466030423193)}),
BinSummary(mean_x=24.196866666666665, summary_values={'basic_lorentzian': BinSummaryValue(mean_y=0.0001410066333333333, stdev_y=0.00010167601686387665)}),
BinSummary(mean_x=394.723, summary_values={'basic_lorentzian': BinSummaryValue(mean_y=1.364947e-06, stdev_y=1.7011900210905307e-06)}),
])
# ---


@@ -0,0 +1,314 @@
import pytest
import tantri.binning.binning as binning
import numpy
def test_bin_config_validation():
with pytest.raises(ValueError):
binning.BinConfig(log_scale=False, bin_width=1, min_points_required=1)
def test_bin_construction_faulty_min():
x_list = numpy.array([5, 6, 7, 8])
bin_config = binning.BinConfig(log_scale=False, bin_width=0.8, bin_min=5.5)
with pytest.raises(ValueError):
binning._construct_bins(x_list, bin_config)
def test_bin_construction_force_min():
x_list = numpy.array([4.5, 5.5, 6.5, 7.5, 8.5])
bin_config = binning.BinConfig(log_scale=False, bin_width=1, bin_min=2)
expected_bins = numpy.array([2, 3, 4, 5, 6, 7, 8, 9])
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_bin_construction_even():
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
bin_config = binning.BinConfig(log_scale=False, bin_width=8)
expected_bins = numpy.array([1, 9, 17, 25, 33])
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_bin_construction_uneven():
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
bin_config = binning.BinConfig(log_scale=False, bin_width=7)
expected_bins = numpy.array([1, 8, 15, 22, 29, 36])
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_bin_construction_uneven_non_integer():
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
bin_config = binning.BinConfig(log_scale=False, bin_width=7.5)
expected_bins = numpy.array([1, 8.5, 16, 23.5, 31, 38.5])
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_group_x_bins(snapshot):
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
y_dict = {
"identity_plus_one": (
numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33]) + 2
)
}
bin_config = binning.BinConfig(log_scale=False, bin_width=8)
# expected_bins = numpy.array([1, 9, 17, 25, 33])
binned = binning.bin_lists(x_list, y_dict, bin_config)
assert binned == snapshot
def test_group_x_bins_mean(snapshot):
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
y_dict = {
"identity_plus_one": (
numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33]) + 2
)
}
bin_config = binning.BinConfig(log_scale=False, bin_width=8)
# expected_bins = numpy.array([1, 9, 17, 25, 33])
binned = binning.bin_lists(x_list, y_dict, bin_config)
mean_binned = [bin.mean_point() for bin in binned]
assert mean_binned == snapshot
def test_group_x_bins_summary(snapshot):
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
y_dict = {
"identity_plus_one": (
numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33]) + 2
)
}
bin_config = binning.BinConfig(log_scale=False, bin_width=8)
# expected_bins = numpy.array([1, 9, 17, 25, 33])
binned = binning.bin_lists(x_list, y_dict, bin_config)
summary = [bin.summary_point() for bin in binned]
assert summary == snapshot
def test_bin_construction_faulty_min_log_scale():
x_list = numpy.array([5, 6, 7, 8])
bin_config = binning.BinConfig(log_scale=True, bin_width=0.8, bin_min=5.5)
with pytest.raises(ValueError):
binning._construct_bins(x_list, bin_config)
def test_bin_construction_force_min_log():
"""
This test shows the main use ofthe bin_min parameter, if we want our bins to nicely line up with decades for example,
then we can force it by ignoring the provided minimum x.
"""
x_list = numpy.array([1500, 5000, 10000, 33253, 400000])
bin_config = binning.BinConfig(log_scale=True, bin_width=1, bin_min=10)
expected_bins = numpy.array([10, 100, 1000, 10000, 100000, 1000000])
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_bin_construction_even_log_scale():
x_list = numpy.array([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33])
# bin width of 0.3 corresponds to 10^0.3 ~= 2, so we're roughly looking at factor-of-two (octave-like) bins
bin_config = binning.BinConfig(log_scale=True, bin_width=0.3)
expected_bins = numpy.array(
[
1.00000000000,
1.99526231497,
3.98107170553,
7.94328234724,
15.8489319246,
31.6227766017,
63.0957344480,
]
)
actual_bins = binning._construct_bins(x_list, bin_config=bin_config)
numpy.testing.assert_allclose(
actual_bins, expected_bins, err_msg="The bins were not as expected"
)
def test_group_x_bins_log(snapshot):
x_list = numpy.array(
[
0.00158489,
0.00363078,
0.0398107,
0.275423,
0.524807,
2.51189,
8.74984,
10.0,
63.0957,
3981.07,
]
)
y_dict = {
"basic_lorentzian": numpy.array(
[
0.159154,
0.15915,
0.158535,
0.134062,
0.0947588,
0.00960602,
0.000838084,
0.000642427,
0.0000162008,
4.06987e-9,
]
)
}
bin_config = binning.BinConfig(log_scale=True, bin_width=2)
# expected_bins = numpy.array([1, 9, 17, 25, 33])
binned = binning.bin_lists(x_list, y_dict, bin_config)
assert binned == snapshot
def test_group_x_bins_mean_log(snapshot):
x_list = numpy.array(
[
0.0158489,
0.0316228,
0.0794328,
0.158489,
0.17378,
0.316228,
0.944061,
0.977237,
0.988553,
3.16228,
5.01187,
15.8489,
25.1189,
31.6228,
158.489,
630.957,
]
)
y_dict = {
"basic_lorentzian": (
numpy.array(
[
0.159056,
0.158763,
0.156715,
0.149866,
0.148118,
0.127657,
0.0497503,
0.0474191,
0.0466561,
0.00619907,
0.00252714,
0.000256378,
0.000102165,
0.0000644769,
2.56787e-6,
1.62024e-7,
]
)
)
}
bin_config = binning.BinConfig(log_scale=True, bin_width=1, bin_min=1e-2)
# expected_bins = numpy.array([1, 9, 17, 25, 33])
binned = binning.bin_lists(x_list, y_dict, bin_config)
mean_binned = [bin.mean_point() for bin in binned]
assert mean_binned == snapshot
def test_group_x_bins_summary_log(snapshot):
x_list = numpy.array(
[
0.0158489,
0.0316228,
0.0794328,
0.158489,
0.17378,
0.316228,
0.944061,
0.977237,
0.988553,
3.16228,
5.01187,
15.8489,
25.1189,
31.6228,
158.489,
630.957,
]
)
y_dict = {
"basic_lorentzian": (
numpy.array(
[
0.159056,
0.158763,
0.156715,
0.149866,
0.148118,
0.127657,
0.0497503,
0.0474191,
0.0466561,
0.00619907,
0.00252714,
0.000256378,
0.000102165,
0.0000644769,
2.56787e-6,
1.62024e-7,
]
)
)
}
bin_config = binning.BinConfig(log_scale=True, bin_width=1, bin_min=1e-2)
binned = binning.bin_lists(x_list, y_dict, bin_config)
summary = [bin.summary_point() for bin in binned]
assert summary == snapshot
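The edge placement these tests pin down can be reconstructed in a few lines: start from `bin_min` (or the smallest x), step by `bin_width` (in log10 space when log-scaled) until the largest x is covered. This is a hypothetical reimplementation inferred from the tests, not the package's actual `_construct_bins`:

```python
import math

def construct_bin_edges(xs, bin_width, log_scale=False, bin_min=None):
    # inferred from the tests above; raises (like _construct_bins) when
    # bin_min would leave the smallest point outside the first bin
    lo = bin_min if bin_min is not None else min(xs)
    if lo > min(xs):
        raise ValueError("bin_min must not exceed the smallest x value")
    hi = max(xs)
    if log_scale:
        lo, hi = math.log10(lo), math.log10(hi)
    count = math.ceil((hi - lo) / bin_width)
    edges = [lo + k * bin_width for k in range(count + 1)]
    return [10**e for e in edges] if log_scale else edges

print(construct_bin_edges([1, 2.8, 8, 12.2, 13.6, 17, 19.71, 20, 24, 33], 7))
# [1, 8, 15, 22, 29, 36]
```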


@@ -5,121 +5,82 @@
'dot1': 1.1618574890412874,
}),
dict({
'dot1': 1.02607677019525,
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 1.1114564655272303,
}),
dict({
'dot1': 1.1114564655272303,
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 1.1618574890412874,
}),
dict({
'dot1': 1.1618574890412874,
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 1.1114564655272303,
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 1.076477793709307,
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 1.1114564655272303,
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 1.076477793709307,
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 1.1618574890412874,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': 1.1618574890412874,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': 1.02607677019525,
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 1.1114564655272303,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': 1.1618574890412874,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': 1.02607677019525,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': 1.02607677019525,
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 1.076477793709307,
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 1.1618574890412874,
}),
dict({
'dot1': 1.02607677019525,
}),
dict({
'dot1': 1.1114564655272303,
}),
dict({
'dot1': 1.1618574890412874,
}),
dict({
'dot1': 1.1618574890412874,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.45540471128623133,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.45540471128623133,
}),
dict({
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.45540471128623133,
}),
dict({
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.45540471128623133,
}),
dict({
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.40500368777217416,
}),
dict({
'dot1': -0.45540471128623133,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.4903833831041545,
}),
dict({
'dot1': -0.31962399244019385,
@@ -128,28 +89,67 @@
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.45540471128623133,
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.45540471128623133,
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.31962399244019385,
}),
dict({
'dot1': -0.40500368777217416,
'dot1': -0.370025015954251,
}),
dict({
'dot1': -0.5407844066182117,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9406970748632697,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': 0.9910980983773269,
}),
dict({
'dot1': -0.4903833831041545,
}),
dict({
'dot1': -0.5407844066182117,
}),
dict({
'dot1': -0.4903833831041545,
}),
dict({
'dot1': -0.4903833831041545,
}),
dict({
'dot1': -0.5407844066182117,
}),
dict({
'dot1': -0.5407844066182117,
}),
])
# ---
@@ -159,121 +159,82 @@
'dot1': 0.4128225535745013,
}),
dict({
'dot1': 0.3931203265687062,
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.42682283788396164,
}),
dict({
'dot1': 0.42682283788396164,
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.4128225535745013,
}),
dict({
'dot1': 0.4128225535745013,
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.42682283788396164,
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.3791200422592459,
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.42682283788396164,
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.3791200422592459,
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.4128225535745013,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': 0.4128225535745013,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': 0.3931203265687062,
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.42682283788396164,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': 0.4128225535745013,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': 0.3931203265687062,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': 0.3931203265687062,
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.3791200422592459,
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.4128225535745013,
}),
dict({
'dot1': 0.3931203265687062,
}),
dict({
'dot1': 0.42682283788396164,
}),
dict({
'dot1': 0.4128225535745013,
}),
dict({
'dot1': 0.4128225535745013,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.5945339944189482,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.5945339944189482,
}),
dict({
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.5945339944189482,
}),
dict({
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.5945339944189482,
}),
dict({
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.6085342787284085,
}),
dict({
'dot1': -0.5945339944189482,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.6422367900436639,
}),
dict({
'dot1': -0.574831767413153,
@@ -282,28 +243,67 @@
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.5945339944189482,
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.5945339944189482,
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.574831767413153,
}),
dict({
'dot1': -0.6085342787284085,
'dot1': -0.5608314831036927,
}),
dict({
'dot1': -0.6282365057342035,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.35941781525345085,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': 0.3454175309439905,
}),
dict({
'dot1': -0.6422367900436639,
}),
dict({
'dot1': -0.6282365057342035,
}),
dict({
'dot1': -0.6422367900436639,
}),
dict({
'dot1': -0.6422367900436639,
}),
dict({
'dot1': -0.6282365057342035,
}),
dict({
'dot1': -0.6282365057342035,
}),
])
# ---

File diff suppressed because it is too large


@@ -1,4 +1,5 @@
from tantri.dipoles.generation import DipoleGenerationConfig, make_dipoles, Orientation
from tantri.dipoles.types import DipoleGenerationConfig, Orientation
from tantri.dipoles.generation import make_dipoles
import numpy


@@ -1,5 +1,5 @@
import json
from tantri.dipoles.generation import DipoleGenerationConfig, Orientation
from tantri.dipoles.types import DipoleGenerationConfig, Orientation
def test_serialise_generation_config_to_json(snapshot):


@@ -0,0 +1,120 @@
import tantri.dipoles.event_time_series as ets
import numpy.random
import logging
from tantri.dipoles.types import (
DipoleTO,
DotPosition,
DipoleMeasurementType,
)
_logger = logging.getLogger(__name__)
SEED = 12345
def test_event_time_series(snapshot):
_logger.info("testing event time series")
rng = numpy.random.default_rng(SEED)
before_func = ets.get_num_events_before(rng=rng, scale=0.5, total_time=10)
time_series = [(i, before_func(i)) for i in range(10)]
_logger.warning(time_series)
assert time_series == snapshot
def test_event_actual_series(snapshot):
rng = numpy.random.default_rng(SEED)
time_series = ets.create_exponential_time_series(rng, 0.1, 200, 0.5)
assert time_series == snapshot
def test_event_dipoles_collection(snapshot):
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), "dot1")]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
d2 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([-2, 3, 2]),
0.1,
)
d3 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([2, 1, 2]),
6,
)
d4 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([-5, -5, 2]),
50,
)
ts_gen = ets.EventDipoleTimeSeries(
[d1, d2, d3, d4],
dots,
DipoleMeasurementType.X_ELECTRIC_FIELD,
0.01,
100,
rng_to_use=rng,
)
time_series = ts_gen.create_time_series()
assert time_series == snapshot
def test_event_dipoles_collection_larger(snapshot):
rng = numpy.random.default_rng(1234)
dots = [
DotPosition(numpy.array([0, 0, 0]), "dot1"),
DotPosition(numpy.array([0, -1, 0]), "dot2"),
]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
d2 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([-2, 3, 2]),
0.1,
)
d3 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([2, 1, 2]),
6,
)
d4 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([-5, -5, 2]),
50,
)
d5 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([-2, -2, 1]),
0.01,
)
ts_gen = ets.EventDipoleTimeSeries(
[d1, d2, d3, d4, d5],
dots,
DipoleMeasurementType.X_ELECTRIC_FIELD,
0.01,
100,
rng_to_use=rng,
)
time_series = ts_gen.create_time_series()
assert time_series == snapshot


@@ -0,0 +1,23 @@
from tantri.dipoles.supersample import get_supersample, SuperSample
def test_raw_supersample_slow():
# let's say our fluctuations are really slow
dt = 1
max_frequency = 0.0001
assert get_supersample(max_frequency, dt) == SuperSample(
super_dt=1, super_sample_ratio=1
)
def test_raw_supersample_fast():
# let's say our fluctuations are fast
dt = 0.3
max_frequency = 10
# Our fastest oscillations will then happen every 0.1 s, so we need a time step of at most 0.01 s, which means dividing dt by 30
assert get_supersample(max_frequency, dt) == SuperSample(
super_dt=0.01, super_sample_ratio=30
)
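Taken together these two tests pin down the contract: keep `dt` when it already resolves the fastest fluctuator, otherwise subdivide it into an integer number of sub-steps at least ten times shorter than the fastest period. A hypothetical reimplementation consistent with both tests (not the actual `tantri.dipoles.supersample` code):

```python
import math
from dataclasses import dataclass

@dataclass
class SuperSample:
    super_dt: float
    super_sample_ratio: int

def get_supersample(max_frequency: float, dt: float) -> SuperSample:
    # target at least 10 samples per period of the fastest dipole
    needed_dt = 1 / (10 * max_frequency)
    if dt <= needed_dt:
        return SuperSample(super_dt=dt, super_sample_ratio=1)
    ratio = math.ceil(dt / needed_dt)
    return SuperSample(super_dt=dt / ratio, super_sample_ratio=ratio)
```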


@@ -0,0 +1,173 @@
# serializer version: 1
# name: test_multiple_apsd
APSDResult(psd_dict={'dot1': array([9.73532062e-04, 8.39303246e-04, 9.61582732e-04, 8.40739221e-04,
9.37066683e-04, 1.03213670e-03, 9.56134787e-04, 9.93209101e-04,
1.08724022e-03, 1.21048571e-03, 9.38615917e-04, 8.52242772e-04,
8.42539642e-04, 1.01955480e-03, 8.97868104e-04, 1.00437469e-03,
7.07499008e-04, 8.18440147e-04, 9.24010511e-04, 8.24215010e-04,
7.12733009e-04, 8.52407925e-04, 7.34862496e-04, 7.63072581e-04,
6.73402818e-04, 7.85605329e-04, 6.30735299e-04, 6.84832870e-04,
6.55976134e-04, 8.52604367e-04, 6.57998050e-04, 8.58669644e-04,
6.31868625e-04, 6.65638673e-04, 6.31591692e-04, 5.74813110e-04,
5.69208218e-04, 6.19839130e-04, 6.65732425e-04, 5.62371802e-04,
5.76800794e-04, 5.65916924e-04, 6.56932756e-04, 6.14845472e-04,
4.99181492e-04, 5.89816624e-04, 5.65243915e-04, 4.67480729e-04,
4.92807307e-04, 3.94075520e-04, 4.17672841e-04, 4.48402297e-04,
3.64196723e-04, 4.36311422e-04, 3.49511887e-04, 4.19780757e-04,
3.77423406e-04, 4.14824457e-04, 3.97342480e-04, 3.68726505e-04,
4.10513044e-04, 4.12630532e-04, 3.29064095e-04, 3.46699091e-04,
3.44594847e-04, 3.50505438e-04, 3.77803485e-04, 2.87885784e-04,
2.93655837e-04, 3.77470181e-04, 3.23308066e-04, 3.20630183e-04,
3.22296166e-04, 3.45562066e-04, 2.59242080e-04, 3.25464512e-04,
2.69122248e-04, 2.74602169e-04, 2.63152125e-04, 2.46491048e-04,
2.24341046e-04, 2.68591099e-04, 2.65104078e-04, 2.60274323e-04,
2.33898312e-04, 2.43711653e-04, 2.41226390e-04, 2.17801406e-04,
2.17888671e-04, 2.01274978e-04, 1.93877002e-04, 1.96390771e-04,
1.87490996e-04, 2.04166379e-04, 2.08714128e-04, 2.29287658e-04,
1.69458646e-04, 1.95255536e-04, 1.74110621e-04, 1.96323415e-04,
1.88541448e-04, 1.81924188e-04, 1.70966036e-04, 1.57648084e-04,
1.60524745e-04, 1.92460519e-04, 1.53432849e-04, 1.55562203e-04,
1.61883665e-04, 1.79482805e-04, 1.65771247e-04, 1.99739063e-04,
1.58577129e-04, 1.47380404e-04, 1.33846897e-04, 1.66883453e-04,
1.51442563e-04, 1.67281574e-04, 1.37008732e-04, 1.55273374e-04,
1.33076567e-04, 1.56652477e-04, 1.55555437e-04, 1.27773146e-04,
1.40395843e-04, 1.16816771e-04, 1.16899846e-04, 1.38621871e-04,
1.24623109e-04, 1.24337619e-04, 1.17615455e-04, 1.12875623e-04,
1.07781236e-04, 1.01389801e-04, 1.15571668e-04, 1.01276406e-04,
1.22516329e-04, 1.17040309e-04, 1.34030708e-04, 1.08730317e-04,
1.18619015e-04, 9.95587258e-05, 1.00906508e-04, 9.29116526e-05,
1.00137183e-04, 9.28680987e-05, 8.01445843e-05, 1.05970263e-04,
9.45430569e-05, 8.00503584e-05, 1.01166198e-04, 9.32925898e-05,
9.08523306e-05, 8.30813940e-05, 8.57761431e-05, 8.50599551e-05,
9.42550643e-05, 7.06534307e-05, 9.38109626e-05, 7.97487294e-05,
7.87467189e-05, 8.60532142e-05, 8.01127568e-05, 6.55220855e-05,
1.10419369e-04, 7.63287410e-05, 9.62261196e-05, 7.80528169e-05,
7.12069198e-05, 8.57268166e-05, 6.88077364e-05, 6.17188837e-05,
7.17568191e-05, 7.89992143e-05, 7.71412885e-05, 7.68729059e-05,
7.75188516e-05, 7.56236336e-05, 6.84518115e-05, 5.78175904e-05,
7.72279483e-05, 6.45329149e-05, 7.66020128e-05, 6.71940726e-05,
7.73912298e-05, 6.37107478e-05, 6.45922648e-05, 6.80344334e-05,
5.48229973e-05, 5.60630527e-05, 6.52106583e-05, 6.47830130e-05,
6.56050943e-05, 7.13923758e-05, 6.90215627e-05, 5.64864551e-05,
6.86743791e-05, 6.90877021e-05, 6.53444393e-05, 5.64311212e-05,
5.47212311e-05, 5.87415418e-05, 5.51011903e-05, 5.96420633e-05,
5.09834748e-05, 5.68214547e-05, 6.36463428e-05, 5.90142079e-05,
6.37071295e-05, 6.31479191e-05, 5.52106181e-05, 5.82988324e-05,
5.12198359e-05, 5.31861871e-05, 4.77621244e-05, 5.82345313e-05,
4.33984309e-05, 5.31962634e-05, 6.48731535e-05, 5.99307716e-05,
5.74245258e-05, 3.88412941e-05, 4.62934114e-05, 4.80469646e-05,
5.03948341e-05, 5.42217253e-05, 4.85038080e-05, 4.91728399e-05,
4.91099787e-05, 4.46390228e-05, 5.46773705e-05, 4.59637262e-05,
4.93209973e-05, 3.57845793e-05, 4.47877782e-05, 4.40482223e-05,
4.84232225e-05, 3.96890804e-05, 5.10627469e-05, 4.53055547e-05,
4.30302383e-05, 4.36255600e-05, 4.54364912e-05, 4.54576504e-05,
4.10822418e-05, 4.42264057e-05, 4.62352548e-05, 4.80138761e-05,
4.01140813e-05, 4.13529669e-05, 4.74170664e-05, 4.77644760e-05,
4.01455048e-05, 3.93192684e-05, 3.61658056e-05, 4.26244670e-05,
5.20559547e-05, 3.29253435e-05, 3.91772747e-05, 3.93002047e-05,
3.82243628e-05, 3.96951610e-05, 4.13726616e-05, 4.21235362e-05,
3.20878033e-05, 3.52791576e-05, 3.98647390e-05, 3.67384189e-05,
3.95222859e-05, 3.96914281e-05, 3.33599974e-05, 3.13742493e-05,
3.77492407e-05, 3.88687449e-05, 4.18502685e-05, 2.75607171e-05,
3.49799432e-05, 4.29941601e-05, 3.40874184e-05, 3.31424257e-05,
3.23881135e-05, 3.10279467e-05, 2.97600366e-05, 3.38951446e-05,
3.69653017e-05, 3.44308555e-05, 3.34807858e-05, 3.32368096e-05,
3.24045191e-05, 3.42274586e-05, 3.47226801e-05, 3.33430915e-05,
2.79309207e-05, 2.85890056e-05, 3.58595400e-05, 3.56541692e-05,
2.97312185e-05, 2.98471656e-05, 2.84603558e-05, 3.38034256e-05,
3.14908877e-05, 2.89949398e-05, 3.59356966e-05, 2.96333483e-05,
2.91882065e-05, 3.17205845e-05, 3.48880484e-05, 2.61650545e-05,
3.15631282e-05, 3.00337860e-05, 2.64494983e-05, 3.10436426e-05,
3.57631331e-05, 3.17242386e-05, 3.30394319e-05, 2.98424226e-05,
3.10027253e-05, 2.61786697e-05, 3.59797317e-05, 2.72164745e-05,
3.21555684e-05, 3.13640141e-05, 2.65379518e-05, 3.16421046e-05,
2.32646534e-05, 2.83861205e-05, 3.42693878e-05, 3.06557252e-05,
2.81036550e-05, 2.89282012e-05, 3.16763804e-05, 2.61476123e-05,
2.85640047e-05, 2.95775793e-05, 2.67187261e-05, 2.92927165e-05,
3.26798613e-05, 2.62158112e-05, 3.17973663e-05, 2.95116461e-05,
2.69703722e-05, 2.98312826e-05, 3.11931343e-05, 2.58819861e-05,
3.16647839e-05, 2.87014896e-05, 2.61242043e-05, 2.57079454e-05,
2.58091499e-05, 2.50869160e-05, 2.54276074e-05, 2.44439218e-05,
2.52572719e-05, 2.98144631e-05, 2.66277431e-05, 3.03880570e-05,
2.78788636e-05, 2.52971662e-05, 2.98456606e-05, 2.75517614e-05,
2.47834735e-05, 2.99825369e-05, 2.51399202e-05, 2.51092295e-05,
2.67307775e-05, 2.34116817e-05, 2.27620334e-05, 2.66901206e-05,
2.05955245e-05, 2.65978777e-05, 2.44415114e-05, 2.44930174e-05,
2.51791081e-05, 2.25198507e-05, 2.41412615e-05, 3.01881015e-05,
2.25523290e-05, 3.06489814e-05, 2.49761975e-05, 2.29860177e-05,
2.67559223e-05, 2.62764172e-05, 2.59503061e-05, 2.48188521e-05,
2.77943655e-05, 2.71207179e-05, 3.08783681e-05, 2.56262648e-05,
2.84800395e-05, 2.23110420e-05, 2.64520744e-05, 2.45062819e-05,
2.28851184e-05, 2.39202185e-05, 2.50554904e-05, 2.46250750e-05,
2.45535212e-05, 2.09270435e-05, 2.12506122e-05, 1.78712553e-05,
2.00666320e-05, 2.58464272e-05, 2.35016439e-05, 2.32995872e-05,
2.46549862e-05, 2.67874896e-05, 2.09833627e-05, 2.18504040e-05,
2.50649930e-05, 2.28623793e-05, 2.33069247e-05, 2.42162495e-05,
2.87389611e-05, 2.17665913e-05, 2.43584154e-05, 2.69878208e-05,
2.18452473e-05, 2.55275529e-05, 2.38911265e-05, 2.16983190e-05,
2.37778250e-05, 2.44521271e-05, 2.57933783e-05, 2.06190911e-05,
2.43679377e-05, 2.41749073e-05, 2.15623674e-05, 2.38205203e-05,
2.13869346e-05, 2.38095406e-05, 2.25544675e-05, 2.15928046e-05,
2.54370470e-05, 2.02202610e-05, 2.23601433e-05, 2.59620109e-05,
2.51600457e-05, 1.83265793e-05, 2.22440731e-05, 2.22461351e-05,
1.98146961e-05, 2.13685791e-05, 2.45735398e-05, 2.37149622e-05,
2.09946835e-05, 2.33047596e-05, 2.04236775e-05, 2.28269774e-05,
2.49828520e-05, 2.02448525e-05, 1.86535318e-05, 2.45785739e-05,
2.40916779e-05, 2.18635797e-05, 2.22628701e-05, 2.05771608e-05,
2.42548135e-05, 2.20517232e-05, 2.36364135e-05, 1.99630472e-05,
2.61963168e-05, 1.95322214e-05, 1.98486112e-05, 2.10734331e-05,
2.11979535e-05, 2.04815286e-05, 2.56003950e-05, 2.40922633e-05,
2.35290452e-05, 2.73907962e-05, 2.22326104e-05, 2.34477154e-05,
1.76264270e-05, 2.00146122e-05, 2.01626249e-05, 2.34853924e-05,
2.09488146e-05, 2.15764152e-05, 2.30110121e-05, 2.09471986e-05,
2.41826684e-05, 2.15110636e-05, 2.11039012e-05, 1.98045826e-05,
2.60175138e-05, 2.04043463e-05, 2.25670355e-05, 2.19150862e-05,
2.12290561e-05, 2.29491312e-05, 2.08250752e-05, 1.94677239e-05,
1.99619307e-05, 1.94053737e-05, 2.02162720e-05, 2.03274024e-05,
1.82522614e-05, 2.07751154e-05, 2.33879107e-05, 2.47441318e-05])}, freqs=array([ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1,
1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. , 2.1, 2.2,
2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. , 3.1, 3.2, 3.3,
3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4. , 4.1, 4.2, 4.3, 4.4,
4.5, 4.6, 4.7, 4.8, 4.9, 5. , 5.1, 5.2, 5.3, 5.4, 5.5,
5.6, 5.7, 5.8, 5.9, 6. , 6.1, 6.2, 6.3, 6.4, 6.5, 6.6,
6.7, 6.8, 6.9, 7. , 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7,
7.8, 7.9, 8. , 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8,
8.9, 9. , 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9,
10. , 10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8, 10.9, 11. ,
11.1, 11.2, 11.3, 11.4, 11.5, 11.6, 11.7, 11.8, 11.9, 12. , 12.1,
12.2, 12.3, 12.4, 12.5, 12.6, 12.7, 12.8, 12.9, 13. , 13.1, 13.2,
13.3, 13.4, 13.5, 13.6, 13.7, 13.8, 13.9, 14. , 14.1, 14.2, 14.3,
14.4, 14.5, 14.6, 14.7, 14.8, 14.9, 15. , 15.1, 15.2, 15.3, 15.4,
15.5, 15.6, 15.7, 15.8, 15.9, 16. , 16.1, 16.2, 16.3, 16.4, 16.5,
16.6, 16.7, 16.8, 16.9, 17. , 17.1, 17.2, 17.3, 17.4, 17.5, 17.6,
17.7, 17.8, 17.9, 18. , 18.1, 18.2, 18.3, 18.4, 18.5, 18.6, 18.7,
18.8, 18.9, 19. , 19.1, 19.2, 19.3, 19.4, 19.5, 19.6, 19.7, 19.8,
19.9, 20. , 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8, 20.9,
21. , 21.1, 21.2, 21.3, 21.4, 21.5, 21.6, 21.7, 21.8, 21.9, 22. ,
22.1, 22.2, 22.3, 22.4, 22.5, 22.6, 22.7, 22.8, 22.9, 23. , 23.1,
23.2, 23.3, 23.4, 23.5, 23.6, 23.7, 23.8, 23.9, 24. , 24.1, 24.2,
24.3, 24.4, 24.5, 24.6, 24.7, 24.8, 24.9, 25. , 25.1, 25.2, 25.3,
25.4, 25.5, 25.6, 25.7, 25.8, 25.9, 26. , 26.1, 26.2, 26.3, 26.4,
26.5, 26.6, 26.7, 26.8, 26.9, 27. , 27.1, 27.2, 27.3, 27.4, 27.5,
27.6, 27.7, 27.8, 27.9, 28. , 28.1, 28.2, 28.3, 28.4, 28.5, 28.6,
28.7, 28.8, 28.9, 29. , 29.1, 29.2, 29.3, 29.4, 29.5, 29.6, 29.7,
29.8, 29.9, 30. , 30.1, 30.2, 30.3, 30.4, 30.5, 30.6, 30.7, 30.8,
30.9, 31. , 31.1, 31.2, 31.3, 31.4, 31.5, 31.6, 31.7, 31.8, 31.9,
32. , 32.1, 32.2, 32.3, 32.4, 32.5, 32.6, 32.7, 32.8, 32.9, 33. ,
33.1, 33.2, 33.3, 33.4, 33.5, 33.6, 33.7, 33.8, 33.9, 34. , 34.1,
34.2, 34.3, 34.4, 34.5, 34.6, 34.7, 34.8, 34.9, 35. , 35.1, 35.2,
35.3, 35.4, 35.5, 35.6, 35.7, 35.8, 35.9, 36. , 36.1, 36.2, 36.3,
36.4, 36.5, 36.6, 36.7, 36.8, 36.9, 37. , 37.1, 37.2, 37.3, 37.4,
37.5, 37.6, 37.7, 37.8, 37.9, 38. , 38.1, 38.2, 38.3, 38.4, 38.5,
38.6, 38.7, 38.8, 38.9, 39. , 39.1, 39.2, 39.3, 39.4, 39.5, 39.6,
39.7, 39.8, 39.9, 40. , 40.1, 40.2, 40.3, 40.4, 40.5, 40.6, 40.7,
40.8, 40.9, 41. , 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8,
41.9, 42. , 42.1, 42.2, 42.3, 42.4, 42.5, 42.6, 42.7, 42.8, 42.9,
43. , 43.1, 43.2, 43.3, 43.4, 43.5, 43.6, 43.7, 43.8, 43.9, 44. ,
44.1, 44.2, 44.3, 44.4, 44.5, 44.6, 44.7, 44.8, 44.9, 45. , 45.1,
45.2, 45.3, 45.4, 45.5, 45.6, 45.7, 45.8, 45.9, 46. , 46.1, 46.2,
46.3, 46.4, 46.5, 46.6, 46.7, 46.8, 46.9, 47. , 47.1, 47.2, 47.3,
47.4, 47.5, 47.6, 47.7, 47.8, 47.9, 48. , 48.1, 48.2, 48.3, 48.4,
48.5, 48.6, 48.7, 48.8, 48.9, 49. , 49.1, 49.2, 49.3, 49.4, 49.5,
49.6, 49.7, 49.8, 49.9, 50. ]))
# ---


@@ -0,0 +1,141 @@
# serializer version: 1
# name: test_single_dipole_time_series_apsd_new_series
APSDResult(psd_dict={'dot1': array([1.43141017e-04, 9.56103070e-04, 1.71013658e-04, 1.51108729e-03,
1.07724951e-03, 6.99571756e-04, 6.30720766e-03, 4.50084579e-04,
4.51029728e-05, 5.55641583e-04, 3.15751279e-03, 2.88808613e-03])}, freqs=array([0.4, 0.8, 1.2, 1.6, 2. , 2.4, 2.8, 3.2, 3.6, 4. , 4.4, 4.8]))
# ---
# name: test_single_dipole_time_series_psd
list([
list([
0j,
(1.0245563439837635+0j),
]),
list([
(0.4+0j),
(-0.06997893548071246-0.04298727993617441j),
]),
list([
(0.8+0j),
(-0.039450404619039356-0.2278123801105552j),
]),
list([
(1.2000000000000002+0j),
(0.06879271540452367+0.18112796265494657j),
]),
list([
(1.6+0j),
(0.08886792286401579-0.13758873132600968j),
]),
list([
(2+0j),
(0.11176347216411753-0.08120091560477474j),
]),
list([
(2.4000000000000004+0j),
(0.2691233202491688-0.010346527463152275j),
]),
list([
(2.8000000000000003+0j),
(0.2576145129611034-0.3358195628624988j),
]),
list([
(3.2+0j),
(-0.0710407429754301+0.08850346984926659j),
]),
list([
(3.6+0j),
(-0.22596960267390404+0.010400520410955974j),
]),
list([
(4+0j),
(0.016306070833852868-0.05018492576136252j),
]),
list([
(4.4+0j),
(0.1514065333713827-0.22893435273278817j),
]),
list([
(4.800000000000001+0j),
(-0.0024668424412067763+0.17633416864658075j),
]),
])
# ---
# name: test_single_dipole_time_series_psd_new_series
TimeSeriesResult(series_dict={'dot1': array([-0.0853797, 0.0853797, -0.0853797, -0.0853797, -0.0853797,
0.0853797, 0.0853797, -0.0853797, 0.0853797, -0.0853797,
0.0853797, -0.0853797, 0.0853797, -0.0853797, -0.0853797,
-0.0853797, 0.0853797, -0.0853797, -0.0853797, 0.0853797,
0.0853797, -0.0853797, -0.0853797, 0.0853797, 0.0853797])}, num_points=25, delta_t=0.1)
# ---
# name: test_two_dipole_time_series_apsd_new_series
APSDResult(psd_dict={'dot1': array([4.02710196e-02, 4.57457789e-02, 5.37775476e-02, 1.71538975e-01,
6.64050617e-02, 2.19812590e-02, 1.24038690e-02, 1.85105262e-02,
5.08871725e-02, 4.12988805e-02, 4.61602125e-02, 2.92425642e-04,
9.03014414e-03, 1.95843872e-05, 3.56775508e-03, 3.75290856e-05,
1.74192219e-02, 4.21181893e-03, 1.58904512e-04, 6.56289154e-03,
7.51516788e-03, 6.47104574e-04, 2.33974923e-03, 7.66894215e-04,
2.57584150e-03, 8.72574773e-04, 5.26689844e-03, 4.21974307e-03,
8.29932651e-04, 6.98429966e-03, 2.68044842e-03, 2.89487560e-03,
1.84343033e-03, 2.88976521e-03, 1.23220130e-03, 2.50296253e-04,
1.23659979e-03, 1.45996347e-03, 1.32959234e-03, 1.11989975e-03,
3.93904037e-03, 8.07172874e-03, 9.82477322e-04, 5.77348582e-04,
8.55728507e-04, 1.37189017e-03, 8.18999874e-04, 4.32091113e-04,
3.63993366e-03, 1.16635078e-04]), 'dot2': array([1.56844649e-01, 1.56126016e-01, 1.85730999e-01, 5.73293874e-01,
2.21480303e-01, 7.15829959e-02, 3.72699197e-02, 6.42246090e-02,
1.62293347e-01, 1.40013278e-01, 1.54181161e-01, 5.95226933e-04,
3.09772138e-02, 1.89287067e-04, 1.31537550e-02, 1.68175913e-04,
5.65200539e-02, 1.39687483e-02, 4.66137518e-04, 2.00658617e-02,
2.46647142e-02, 1.70799346e-03, 7.59034470e-03, 3.35916059e-03,
7.54176383e-03, 1.92217501e-03, 1.59281721e-02, 1.38594650e-02,
4.20932884e-03, 2.17324808e-02, 8.90924632e-03, 9.76777034e-03,
6.63256403e-03, 8.57467451e-03, 3.08820297e-03, 7.17980397e-04,
2.75319390e-03, 4.73466029e-03, 3.64705739e-03, 2.82026829e-03,
1.06999569e-02, 2.32474190e-02, 3.10842186e-03, 2.05800401e-03,
3.55445806e-03, 3.28394200e-03, 3.29915455e-03, 1.81990222e-03,
1.29296403e-02, 2.62413383e-04])}, freqs=array([ 0.2, 0.4, 0.6, 0.8, 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2,
2.4, 2.6, 2.8, 3. , 3.2, 3.4, 3.6, 3.8, 4. , 4.2, 4.4,
4.6, 4.8, 5. , 5.2, 5.4, 5.6, 5.8, 6. , 6.2, 6.4, 6.6,
6.8, 7. , 7.2, 7.4, 7.6, 7.8, 8. , 8.2, 8.4, 8.6, 8.8,
9. , 9.2, 9.4, 9.6, 9.8, 10. ]))
# ---
# name: test_two_dipole_time_series_psd_new_series
TimeSeriesResult(series_dict={'dot1': array([ 0.28499068, 0.45575007, 0.45575007, 0.45575007, 0.45575007,
-0.28499068, -0.45575007, -0.45575007, -0.45575007, -0.28499068,
-0.45575007, -0.45575007, -0.28499068, -0.45575007, -0.28499068,
-0.45575007, -0.28499068, -0.28499068, -0.45575007, 0.28499068,
0.45575007, 0.28499068, 0.28499068, 0.45575007, 0.28499068,
0.28499068, 0.45575007, 0.45575007, 0.28499068, 0.28499068,
0.28499068, 0.28499068, 0.28499068, 0.28499068, -0.28499068,
-0.45575007, -0.45575007, 0.28499068, 0.28499068, 0.28499068,
0.28499068, 0.28499068, 0.28499068, 0.28499068, 0.28499068,
0.28499068, 0.28499068, 0.45575007, 0.28499068, 0.45575007,
-0.28499068, -0.28499068, -0.45575007, -0.45575007, -0.45575007,
-0.45575007, -0.28499068, -0.28499068, -0.28499068, -0.45575007,
-0.45575007, -0.28499068, -0.45575007, -0.28499068, -0.28499068,
-0.45575007, 0.45575007, 0.28499068, 0.45575007, 0.45575007,
0.28499068, 0.45575007, 0.45575007, 0.45575007, -0.28499068,
-0.28499068, -0.28499068, -0.28499068, -0.28499068, -0.45575007,
-0.28499068, -0.28499068, -0.28499068, -0.45575007, -0.45575007,
-0.28499068, -0.28499068, -0.28499068, -0.28499068, -0.28499068,
0.45575007, 0.45575007, 0.45575007, 0.45575007, 0.28499068,
-0.45575007, -0.45575007, -0.28499068, -0.45575007, -0.28499068]), 'dot2': array([ 0.55234807, 0.80847957, 0.80847957, 0.80847957, 0.80847957,
-0.55234807, -0.80847957, -0.80847957, -0.80847957, -0.55234807,
-0.80847957, -0.80847957, -0.55234807, -0.80847957, -0.55234807,
-0.80847957, -0.55234807, -0.55234807, -0.80847957, 0.55234807,
0.80847957, 0.55234807, 0.55234807, 0.80847957, 0.55234807,
0.55234807, 0.80847957, 0.80847957, 0.55234807, 0.55234807,
0.55234807, 0.55234807, 0.55234807, 0.55234807, -0.55234807,
-0.80847957, -0.80847957, 0.55234807, 0.55234807, 0.55234807,
0.55234807, 0.55234807, 0.55234807, 0.55234807, 0.55234807,
0.55234807, 0.55234807, 0.80847957, 0.55234807, 0.80847957,
-0.55234807, -0.55234807, -0.80847957, -0.80847957, -0.80847957,
-0.80847957, -0.55234807, -0.55234807, -0.55234807, -0.80847957,
-0.80847957, -0.55234807, -0.80847957, -0.55234807, -0.55234807,
-0.80847957, 0.80847957, 0.55234807, 0.80847957, 0.80847957,
0.55234807, 0.80847957, 0.80847957, 0.80847957, -0.55234807,
-0.55234807, -0.55234807, -0.55234807, -0.55234807, -0.80847957,
-0.55234807, -0.55234807, -0.55234807, -0.80847957, -0.80847957,
-0.55234807, -0.55234807, -0.55234807, -0.55234807, -0.55234807,
0.80847957, 0.80847957, 0.80847957, 0.80847957, 0.55234807,
-0.80847957, -0.80847957, -0.55234807, -0.80847957, -0.55234807])}, num_points=100, delta_t=0.05)
# ---


@@ -0,0 +1,36 @@
import pytest
import numpy
import tantri.dipoles.time_series


def test_apsd_merge():
freqs = numpy.array([0.01, 0.1, 1, 10, 100])
dict1 = {"t1": numpy.array([1, 2, 3, 4, 5])}
a1 = tantri.dipoles.time_series.APSDResult(dict1, freqs)
dict2 = {"t1": numpy.array([3, 4, 5, 6, 7])}
a2 = tantri.dipoles.time_series.APSDResult(dict2, freqs)
merged = tantri.dipoles.time_series.average_apsds([a1, a2])
expected = tantri.dipoles.time_series.APSDResult(
psd_dict={
"t1": numpy.array([2, 3, 4, 5, 6]),
},
freqs=freqs,
)
numpy.testing.assert_equal(merged.freqs, expected.freqs)
numpy.testing.assert_equal(merged.psd_dict, expected.psd_dict)


def test_apsd_merge_mismatch_freqs():
dict = {"t1": numpy.array([1, 2, 3, 4, 5])}
freqs1 = numpy.array([0.01, 0.1, 1, 10, 100])
a1 = tantri.dipoles.time_series.APSDResult(dict, freqs1)
freqs2 = numpy.array([1, 3, 5, 7, 9])
a2 = tantri.dipoles.time_series.APSDResult(dict, freqs2)
with pytest.raises(ValueError):
tantri.dipoles.time_series.average_apsds([a1, a2])
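The two tests above pin down the averaging contract: element-wise means per dot, and a `ValueError` on mismatched frequency axes. A minimal standalone sketch of that behavior, written with plain numpy rather than tantri's internals (`average_apsds_sketch` and its signature are hypothetical, for illustration only):

```python
import numpy


def average_apsds_sketch(psd_dicts, freqs_list):
    """Average per-dot PSD arrays across runs, refusing mismatched frequency axes."""
    first_freqs = freqs_list[0]
    for freqs in freqs_list[1:]:
        if not numpy.array_equal(first_freqs, freqs):
            raise ValueError("cannot average APSDs with different frequency axes")
    # element-wise mean over the runs, keyed by dot name
    merged = {
        key: numpy.mean([d[key] for d in psd_dicts], axis=0) for key in psd_dicts[0]
    }
    return merged, first_freqs


freqs = numpy.array([0.01, 0.1, 1.0, 10.0, 100.0])
merged, merged_freqs = average_apsds_sketch(
    [
        {"t1": numpy.array([1.0, 2.0, 3.0, 4.0, 5.0])},
        {"t1": numpy.array([3.0, 4.0, 5.0, 6.0, 7.0])},
    ],
    [freqs, freqs],
)
# merged["t1"] is array([2., 3., 4., 5., 6.]), matching the expected value above
```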


@@ -0,0 +1,83 @@
import numpy
from tantri.dipoles import (
DipoleTO,
DipoleTimeSeries,
DotPosition,
DipoleMeasurementType,
)
import logging
_logger = logging.getLogger(__name__)


def test_multiple_apsd(snapshot):
dot_name = "dot1"
num_series_to_create = 100
num_ts_points = 1000
delta_t = 0.01
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), dot_name)]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
ts_gen = DipoleTimeSeries(
[d1],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
estimated_psd = ts_gen.generate_average_apsd(num_series_to_create, num_ts_points)
assert estimated_psd == snapshot


def test_multiple_apsd_compare_analytic():
dot_name = "dot1"
num_series_to_create = 500
num_ts_points = 10000
delta_t = 0.001
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), dot_name)]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
ts_gen = DipoleTimeSeries(
[d1],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
def s_analytic_potential(f: float, dot: DotPosition, dipole: DipoleTO):
g = dipole.w / ((numpy.pi * f) ** 2 + dipole.w**2)
rdiff = dot.r - dipole.s
return 2 * g * ((rdiff.dot(dipole.p) / (numpy.linalg.norm(rdiff) ** 3)) ** 2)
estimated_psd = ts_gen.generate_average_apsd(num_series_to_create, num_ts_points)
_logger.warning(estimated_psd)
analytic = numpy.array(
[s_analytic_potential(f, dots[0], d1) for f in estimated_psd.freqs]
)
numpy.testing.assert_allclose(
estimated_psd.psd_dict[dot_name], analytic, rtol=1.5, atol=1e-7
)
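The analytic reference used in the comparison test is a Lorentzian: for a single two-state dipole with switching rate `w`, the potential PSD is flat below the corner frequency and rolls off as 1/f² above it. A self-contained sketch of the same formula (names are illustrative; the expression is copied from `s_analytic_potential` above):

```python
import numpy


def analytic_potential_psd(f, dot_r, p, s, w):
    """Lorentzian PSD of the electric potential from one two-state dipole.

    Mirrors the in-test helper: a Lorentzian with corner set by the switching
    rate w, scaled by the squared static potential (dot_r - s) . p / |dot_r - s|**3.
    """
    g = w / ((numpy.pi * f) ** 2 + w**2)
    rdiff = dot_r - s
    return 2 * g * (rdiff.dot(p) / numpy.linalg.norm(rdiff) ** 3) ** 2


# Same geometry as the single-dipole tests: dot at the origin, p = [0, 0, 10],
# s = [5, 3, 2], w = 15.
dot_r = numpy.array([0.0, 0.0, 0.0])
p = numpy.array([0.0, 0.0, 10.0])
s = numpy.array([5.0, 3.0, 2.0])
low = analytic_potential_psd(0.1, dot_r, p, s, 15.0)
high = analytic_potential_psd(100.0, dot_r, p, s, 15.0)
```

As a sanity check against the snapshots: the static potential magnitude |(dot_r - s) . p| / |dot_r - s|³ for this geometry is ≈ 0.0853797, exactly the two-level values ±0.0853797 that appear in the single-dipole time-series snapshot above.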


@@ -0,0 +1,176 @@
"""
Tests for PSDs and APSDs generated from dipole time series.
"""
from tantri.dipoles import (
DipoleTO,
DipoleTimeSeries,
DotPosition,
DipoleMeasurementType,
)
import numpy
import scipy.fft


def test_single_dipole_time_series_psd(snapshot):
dot_name = "dot1"
num_points = 25
delta_t = 0.1
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), dot_name)]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
ts_gen = DipoleTimeSeries(
[d1],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
# time_series = [ts_gen.transition() for i in range(25)]
# time_series_to_dict_list = {k: [dic[k] for dic in time_series] for k in time_series[0]}
# fft_dict = {k: scipy.fft.rfft(v) for k, v in time_series_to_dict_list.items()}
time_series = [ts_gen.transition()[dot_name] for i in range(num_points)]
fft = scipy.fft.rfft(time_series)
freqs = scipy.fft.rfftfreq(num_points, delta_t)
result = numpy.array([freqs, fft]).transpose().tolist()
assert result == snapshot


def test_single_dipole_time_series_psd_new_series(snapshot):
dot_name = "dot1"
num_points = 25
delta_t = 0.1
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), dot_name)]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
ts_gen = DipoleTimeSeries(
[d1],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
result = ts_gen.generate_series(num_points)
assert result == snapshot


def test_two_dipole_time_series_psd_new_series(snapshot):
num_points = 100
delta_t = 0.05
rng = numpy.random.default_rng(1234)
dots = [
DotPosition(numpy.array([0, 0, 0]), "dot1"),
DotPosition(numpy.array([1, 0, 0]), "dot2"),
]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
d2 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([2, 2, 1]),
2,
)
ts_gen = DipoleTimeSeries(
[d1, d2],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
result = ts_gen.generate_series(num_points)
assert result == snapshot


def test_single_dipole_time_series_apsd_new_series(snapshot):
dot_name = "dot1"
num_points = 25
delta_t = 0.1
rng = numpy.random.default_rng(1234)
dots = [DotPosition(numpy.array([0, 0, 0]), dot_name)]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
ts_gen = DipoleTimeSeries(
[d1],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
result = ts_gen.generate_series(num_points).get_apsds()
assert result == snapshot


def test_two_dipole_time_series_apsd_new_series(snapshot):
num_points = 100
delta_t = 0.05
rng = numpy.random.default_rng(1234)
dots = [
DotPosition(numpy.array([0, 0, 0]), "dot1"),
DotPosition(numpy.array([1, 0, 0]), "dot2"),
]
d1 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([5, 3, 2]),
15,
)
d2 = DipoleTO(
numpy.array([0, 0, 10]),
numpy.array([2, 2, 1]),
2,
)
ts_gen = DipoleTimeSeries(
[d1, d2],
dots,
DipoleMeasurementType.ELECTRIC_POTENTIAL,
delta_t,
rng_to_use=rng,
)
result = ts_gen.generate_series(num_points).get_apsds()
assert result == snapshot

tests/test_util.py

@@ -0,0 +1,28 @@
import typing
import tantri.util
import numpy


def test_mean_dict():
dict1 = {
"squares": numpy.array([1, 4, 9, 16]),
"linear": numpy.array([1, 2, 3, 4, 5]),
}
dict2 = {
"squares": numpy.array([2, 8, 18, 32]),
"linear": numpy.array([2, 4, 6, 8, 10]),
}
def mean(list_of_arrays: typing.Sequence[numpy.ndarray]) -> numpy.ndarray:
return numpy.mean(numpy.array(list_of_arrays), axis=0)
result = tantri.util.dict_reduce([dict1, dict2], mean)
expected = {
"squares": 1.5 * numpy.array([1, 4, 9, 16]),
"linear": 1.5 * numpy.array([1, 2, 3, 4, 5]),
}
numpy.testing.assert_equal(
result, expected, "The reduced dictionary should have matched the expected"
)
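The reduction exercised by `test_mean_dict` can be sketched as a generic key-wise reduce over parallel dicts. This is a hypothetical equivalent of `tantri.util.dict_reduce`, not its actual implementation, and it assumes every input dict shares the same keys:

```python
import typing

import numpy


def dict_reduce(
    dicts: typing.Sequence[typing.Dict[str, numpy.ndarray]],
    reducer: typing.Callable[[typing.Sequence[numpy.ndarray]], numpy.ndarray],
) -> typing.Dict[str, numpy.ndarray]:
    """Collect the arrays stored under each shared key and apply the reducer."""
    return {key: reducer([d[key] for d in dicts]) for key in dicts[0]}


# Summing instead of averaging, to show the reducer is pluggable.
summed = dict_reduce(
    [{"x": numpy.array([1, 2])}, {"x": numpy.array([3, 4])}],
    lambda arrays: numpy.sum(numpy.array(arrays), axis=0),
)
# summed["x"] == array([4, 6])
```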