confusius.connectivity¶
Functional connectivity analysis for fUSI data.
Modules:

- cap – Co-activation patterns (CAPs) analysis for fUSI data.
- matrix – Connectivity matrix estimation from time series data.
- seed – Seed-based functional connectivity maps for fUSI data.

Classes:

- CAP – Co-activation pattern (CAP) analysis for fUSI data.
- ConnectivityMatrix – Functional connectivity matrices from fUSI region time series.
- SeedBasedMaps – Seed-based functional connectivity maps from fUSI data.

Functions:

- covariance_to_correlation – Return the correlation matrix for a given covariance matrix.
- precision_to_partial_correlation – Return the partial correlation matrix for a given precision matrix.
- symmetric_matrix_to_vector – Return the flattened lower triangular part of a symmetric matrix.
- vector_to_symmetric_matrix – Return the symmetric matrix given its flattened lower triangular part.
CAP ¶
Bases: BaseEstimator
Co-activation pattern (CAP) analysis for fUSI data.
CAP analysis consists of clustering all volumes in one or more recordings using k-means. Note that classical k-means minimizes within-cluster deviations from cluster centers, which amounts to minimizing squared Euclidean distances; convergence is not guaranteed when standard k-means is run with other distances.
To allow for other metrics, this estimator changes the geometry according to
metric: Euclidean k-means for "euclidean", and spherical (cosine-based) k-means
for "cosine" and "correlation" after normalization preprocessing.
For "correlation" and "cosine", this estimator uses a custom Lloyd-style cosine
k-means with k-means++ initialization. For "euclidean", sklearn's
KMeans is used.
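The Lloyd-style cosine k-means described above can be sketched in a few lines of NumPy. This is an illustrative re-implementation for intuition only, not the estimator's actual code; random initialization stands in for k-means++ seeding:

```python
import numpy as np

def cosine_kmeans(X, n_clusters, max_iter=300, seed=0):
    """Minimal spherical k-means on (n_samples, n_features) data."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto the unit sphere
    # Random initialization (the real estimator uses k-means++ seeding).
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assignment: nearest center by cosine similarity (dot product on the sphere).
        new_labels = (X @ centers.T).argmax(axis=1)
        if np.array_equal(new_labels, labels):
            break  # labels stable: converged
        labels = new_labels
        # "mean" update rule: centers are the L2-normalized sum of assigned volumes.
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                centers[k] = members.sum(axis=0)
                centers[k] /= np.linalg.norm(centers[k])
    return labels, centers
```

For the "correlation" metric, each row would additionally have its mean subtracted before the normalization step.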
Preprocessing matters
Strong global structure can produce very similar CAPs across clusters.
Temporally standardizing each voxel via clean before
calling fit is often helpful (e.g.,
standardize_method="zscore").
Parameters:

- n_clusters (int, default: 10) – Number of CAPs to extract.
- metric ({'correlation', 'cosine', 'euclidean'}, default: "correlation") – Clustering geometry:
  - "correlation": center each volume (subtract spatial mean), then L2-normalize and cluster with cosine k-means. Equivalent to Pearson-correlation geometry and sign-sensitive (anti-correlated volumes are far apart).
  - "cosine": L2-normalize each volume (without centering), then cluster with cosine k-means.
  - "euclidean": cluster preprocessed volumes with Euclidean k-means (sklearn KMeans).
- update_rule ({'mean', 'weighted'}, default: "mean") – Center update rule for cosine/correlation clustering:
  - "mean": standard spherical k-means: centers are the L2-normalized sum of assigned volumes. This is the theoretically correct update that minimizes the sum of cosine distances and is recommended.
  - "weighted": weights each volume by its cosine similarity to the current center before averaging, reducing the influence of low-confidence volumes on center updates.
- max_iter (int, default: 300) – Maximum assignment-update iterations per run. Stops early if labels no longer change.
- n_local_trials (int or None, default: None) – Number of candidate centers evaluated greedily at each k-means++ seeding step. If None, uses 2 + int(np.log(n_clusters)), matching sklearn's default. Only used when metric is "correlation" or "cosine".
- n_init (int or {'auto'}, default: "auto") – Number of independent random initializations. If "auto", this follows sklearn's k-means++ behavior and runs a single initialization. Applies to all metrics.
- random_state (int or None, default: 0) – Seed for the random number generator.
Attributes:

- caps_ ((cap, ...) xarray.DataArray) – CAP spatial maps, one per cluster. cap is the leading dimension; the remaining dimensions match the spatial dimensions of the data passed to fit. For "correlation" and "cosine" metrics, maps are unit-norm vectors in the preprocessed space. attrs["long_name"] is set to "CAP" and attrs["cmap"] to "coolwarm" so that plotting functions pick up sensible defaults automatically.
- labels_ (list[DataArray]) – Per-recording CAP index sequences (0-based integer). Each element has dims=["time"] and, when present, the time coordinates of the corresponding input recording. The list length equals the number of recordings passed to fit.
- scores_ (list[DataArray]) – Per-recording quality score sequences, parallel to labels_. Each element has dims=["time"] (float64) and carries the same time coordinates as the corresponding entry in labels_. Higher scores always indicate stronger assignment to the nearest CAP:
  - "correlation" / "cosine": cosine similarity to the assigned center, in the range [-1, 1].
  - "euclidean": negative L2 distance to the assigned center (≤ 0, with 0 meaning the volume lies exactly on the center).
Examples:
>>> import numpy as np
>>> import xarray as xr
>>> from confusius.connectivity import CAP
>>>
>>> rng = np.random.default_rng(0)
>>> data = xr.DataArray(
... rng.standard_normal((200, 10, 20)),
... dims=["time", "y", "x"],
... )
>>>
>>> caps = CAP(n_clusters=5, random_state=0)
>>> caps.fit([data])
CAP(n_clusters=5, random_state=0)
>>> caps.caps_.dims
('cap', 'y', 'x')
>>> caps.caps_.sizes["cap"]
5
>>> len(caps.labels_)
1
>>> caps.labels_[0].dims
('time',)
>>> caps.labels_[0].sizes["time"]
200
References

- Arthur, D. and Vassilvitskii, S. "k-means++: the advantages of careful seeding." ACM-SIAM Symposium on Discrete Algorithms (SODA), 2007. ↩
Methods:

- compute_temporal_metrics – Compute temporal dynamics metrics for each recording.
- fit – Fit co-activation patterns by clustering volumes across all recordings.
- get_metadata_routing – Get metadata routing of this object.
- get_params – Get parameters for this estimator.
- predict – Assign recordings to CAPs using the fitted cluster centers.
- score_samples – Compute per-volume quality scores for recordings.
- select_n_clusters – Select the optimal number of clusters.
- set_params – Set the parameters of this estimator.
compute_temporal_metrics ¶
compute_temporal_metrics(
score_threshold: float | None = None,
) -> Dataset
Compute temporal dynamics metrics for each recording.
Persistence is expressed in the time units of the recording when the labels_
DataArrays carry a time coordinate; otherwise in volumes. The temporal
resolution need not be constant: volume durations are derived from consecutive
differences of the time coordinate, so irregular sampling is handled correctly.
Parameters:

- score_threshold (float or None, default: None) – Minimum per-volume quality score for inclusion. Volumes with scores_[i][t] < score_threshold are treated as unassigned and do not contribute to any metric numerator (temporal fraction, episode counts, persistence, or transitions). The total-volume denominator used for temporal_fraction is kept fixed regardless of how many volumes are excluded. Censored volumes act as natural episode breaks: two runs of the same CAP separated only by censored volumes are counted as separate episodes. When None, all volumes are included.

Returns:

- Dataset – Dataset indexed by recording (0-based) and cap with variables:
  - temporal_fraction (recording, cap): fraction of total volumes assigned to each CAP (denominator is always the total number of volumes, even when some are censored by score_threshold).
  - counts (recording, cap): number of contiguous episodes of each CAP.
  - persistence (recording, cap): mean episode duration. Zero when the CAP never appears. Units are inherited from the time coordinate's units attribute, or "time" when no such attribute exists, or "volumes" when no time coordinate is present.
  - transition_frequency (recording,): total number of CAP switches (censored volumes are skipped; transitions across censored gaps are not counted).
  - transition_matrix (recording, cap_from, cap_to): row-normalized transition probability matrix. Rows sum to 1 when the corresponding CAP appears; zero rows indicate CAPs that never appear as the origin of a transition.
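For intuition, the per-CAP temporal fraction, episode counts, and persistence can be approximated from a plain label sequence. The sketch below assumes regular sampling with spacing dt and no score censoring, unlike the full implementation:

```python
import numpy as np

def temporal_metrics(labels, n_clusters, dt=1.0):
    """Temporal fraction, episode counts, and mean episode duration per CAP."""
    labels = np.asarray(labels)
    fraction = np.array([(labels == k).mean() for k in range(n_clusters)])
    # Episodes are contiguous runs of the same label.
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.concatenate([[0], change])
    ends = np.concatenate([change, [len(labels)]])
    persistence = np.zeros(n_clusters)
    counts = np.zeros(n_clusters, dtype=int)
    for s, e in zip(starts, ends):
        k = labels[s]
        counts[k] += 1
        persistence[k] += (e - s) * dt  # episode duration in time units
    # Mean episode duration; zero for CAPs that never appear.
    persistence = np.where(counts > 0, persistence / np.maximum(counts, 1), 0.0)
    return fraction, counts, persistence
```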
Raises:

- NotFittedError – If the estimator has not been fitted yet.
fit ¶
Fit co-activation patterns by clustering volumes across all recordings.
Parameters:

- X (list[DataArray] or DataArray) – One or more fUSI recordings to extract CAPs from. Each DataArray must have a time dimension with at least 2 timepoints. All remaining dimensions are treated as spatial and flattened into a feature vector per volume. A single DataArray is treated as a single recording.
- y (None, default: None) – Ignored. Present for sklearn API compatibility.

Returns:

- CAP – Fitted estimator.

Raises:

- ValueError – If metric or update_rule is invalid, if X is an empty list, or if any recording has no time dimension or fewer than 2 timepoints.
get_metadata_routing ¶
Get metadata routing of this object.
Please check the scikit-learn User Guide on metadata routing for how the routing mechanism works.

Returns:

- routing (MetadataRequest) – A sklearn.utils.metadata_routing.MetadataRequest encapsulating routing information.
predict ¶
Assign recordings to CAPs using the fitted cluster centers.
Parameters:

- X (list[DataArray] or DataArray) – One or more fUSI recordings to assign. Each must have the same spatial dimensions as the data passed to fit. A single DataArray is treated as a single recording.

Returns:

- list[DataArray] – Per-recording CAP label sequences (0-based integer), one (time,) DataArray per input recording. Time coordinates are preserved when present.

Raises:

- NotFittedError – If the estimator has not been fitted yet.
- ValueError – If any recording has no time dimension or fewer than 2 timepoints.
score_samples ¶
Compute per-volume quality scores for recordings.
The score for each volume reflects how strongly it is assigned to its nearest CAP. Higher scores always indicate stronger assignment:
"correlation"/"cosine": cosine similarity to the assigned center, in the range [-1, 1]."euclidean": negative L2 distance to the assigned center (≤ 0, with 0 meaning the volume lies exactly on the center).
Parameters:

- X (list[DataArray] or DataArray) – One or more fUSI recordings to score. Each must have the same spatial dimensions as the data passed to fit. A single DataArray is treated as a single recording.

Returns:

- list[DataArray] – Per-recording quality score sequences, one (time,) DataArray per input recording. Time coordinates are preserved when present.

Raises:

- NotFittedError – If the estimator has not been fitted yet.
- ValueError – If any recording has no time dimension or fewer than 2 timepoints.
select_n_clusters ¶
select_n_clusters(
X: list[DataArray] | DataArray,
cluster_range: range | list[int],
method: Literal[
"elbow",
"silhouette",
"davies_bouldin",
"variance_ratio",
] = "silhouette",
show_progress: bool = True,
progress: "Progress | None" = None,
) -> int
Select the optimal number of clusters.
Fits k-means for each value in cluster_range (preprocessing runs
only once) and returns the cluster count that optimizes method.
Parameters:

- X (list[DataArray] or DataArray) – Same data that will later be passed to fit.
- cluster_range (range or list[int]) – Values of n_clusters to evaluate. Must contain at least 2 entries, each ≥ 2.
- method ({'elbow', 'silhouette', 'davies_bouldin', 'variance_ratio'}, default: "silhouette") – Selection criterion:
  - "elbow": minimize cosine inertia (or Euclidean inertia for metric="euclidean"); the elbow is found as the point of maximum perpendicular distance from the diagonal of the inertia curve.
  - "silhouette": maximize the silhouette score, computed with cosine distance for metric="correlation" or "cosine", and Euclidean distance for metric="euclidean".
  - "davies_bouldin": minimize the Davies-Bouldin index (Euclidean, applied to the preprocessed volumes).
  - "variance_ratio": maximize the Calinski-Harabasz index (Euclidean, applied to the preprocessed volumes).
- show_progress (bool, default: True) – Whether to display a progress bar while evaluating cluster counts.
- progress (Progress or None, default: None) – External rich.progress.Progress instance to add tasks to. If provided and show_progress is True, a task is added to this instance instead of creating a new progress bar with rich.progress.track.

Returns:

- int – Recommended number of clusters.

Raises:

- ValueError – If metric, update_rule, or method is invalid, or if cluster_range has fewer than 2 entries or any entry is < 2.
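The "elbow" criterion above can be illustrated directly: rescale both axes of the inertia curve to [0, 1] and pick the point farthest, perpendicularly, from the line joining the curve's endpoints. A minimal sketch, independent of this estimator:

```python
import numpy as np

def elbow_point(ks, inertias):
    """Return the k at maximum perpendicular distance from the endpoint line."""
    ks = np.asarray(ks, dtype=float)
    inertias = np.asarray(inertias, dtype=float)
    # Rescale both axes to [0, 1] so the distance is scale-invariant.
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    y = (inertias - inertias[0]) / (inertias[-1] - inertias[0])
    # Perpendicular distance from each point to the diagonal through (0,0) and (1,1).
    dist = np.abs(x - y) / np.sqrt(2)
    return int(ks[np.argmax(dist)])
```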
set_params ¶
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as sklearn.pipeline.Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters:

- **params (dict) – Estimator parameters.

Returns:

- self (estimator instance) – Estimator instance.
ConnectivityMatrix ¶
Bases: BaseEstimator
Functional connectivity matrices from fUSI region time series.
Computes pairwise connectivity matrices between brain regions from time series DataArrays using one of several estimators: covariance, correlation, partial correlation, precision, or tangent-space projection. Supports both single-subject and group-level analysis.
Parameters:

- cov_estimator (sklearn covariance estimator, default: None) – Estimator used to compute covariance matrices. Defaults to LedoitWolf(store_precision=False), which applies a small shrinkage towards zero compared to the maximum-likelihood estimate.
- kind ({'covariance', 'correlation', 'partial correlation', 'tangent', 'precision'}, default: "covariance") – Type of connectivity matrix to compute:
  - "covariance": raw covariance matrix.
  - "correlation": Pearson correlation matrix.
  - "partial correlation": partial correlation matrix, controlling for all other variables.
  - "precision": inverse of the covariance matrix.
  - "tangent": symmetric displacement in the tangent space at the group geometric mean. Requires at least two subjects in fit_transform.
- vectorize (bool, default: False) – Whether connectivity matrices should be flattened to 1D vectors containing only the lower triangular elements.
- discard_diagonal (bool, default: False) – Whether diagonal elements should be excluded from the vectorized output. Only used when vectorize is True.
Attributes:

- cov_estimator_ (sklearn covariance estimator) – A copy of cov_estimator with the same parameters, used during fitting.
- mean_ ((n_features, n_features) numpy.ndarray) – Mean connectivity matrix across subjects. For "tangent" kind, this is the geometric mean of the covariance matrices. For other kinds, it is the arithmetic mean.
- whitening_ ((n_features, n_features) numpy.ndarray or None) – Inverse square root of the geometric mean covariance. Only set for "tangent" kind; None otherwise.
- n_features_in_ (int) – Number of features seen during fit.
- features_dim_in_ (str) – Name of the features dimension in the input DataArrays.
Notes
Adapted from Nilearn's
ConnectivityMeasure (BSD-3-Clause
License; see NOTICE for attribution).
References

- Varoquaux, G., Baronnet, F., Kleinschmidt, A., Fillard, P., Thirion, B. (2010). Detection of Brain Functional-Connectivity Difference in Post-stroke Patients Using Group-Level Covariance Modeling. In: Jiang, T., Navab, N., Pluim, J.P.W., Viergever, M.A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010. Lecture Notes in Computer Science, vol 6361. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15705-9_25 ↩
Examples:
>>> import numpy as np
>>> import xarray as xr
>>> from confusius.connectivity import ConnectivityMatrix
>>>
>>> rng = np.random.default_rng(0)
>>> # Five subjects, each with 100 time points and 10 brain regions.
>>> signals = [
... xr.DataArray(
... rng.standard_normal((100, 10)),
... dims=["time", "regions"],
... )
... for _ in range(5)
... ]
>>>
>>> measure = ConnectivityMatrix(kind="correlation")
>>> connectivities = measure.fit_transform(signals)
>>> connectivities.shape
(5, 10, 10)
>>>
>>> # Vectorized output.
>>> measure_vec = ConnectivityMatrix(kind="correlation", vectorize=True)
>>> vecs = measure_vec.fit_transform(signals)
>>> vecs.shape
(5, 55)
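The "tangent" kind maps each subject's covariance to the tangent space at the group geometric mean (Varoquaux et al., 2010). A minimal sketch of the projection, assuming the geometric mean is already available (the estimator itself computes it iteratively, which is omitted here):

```python
import numpy as np
from scipy.linalg import logm

def tangent_displacement(cov, mean_cov):
    """Project an SPD covariance to the tangent space at mean_cov."""
    # Whitening: inverse square root of the reference covariance.
    eigval, eigvec = np.linalg.eigh(mean_cov)
    whitening = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    # Symmetric displacement: matrix logarithm of the whitened covariance.
    return logm(whitening @ cov @ whitening)
```

A subject whose covariance equals the group mean has zero displacement, which is why at least two subjects are required for this kind.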
Methods:

- fit – Fit the covariance estimator and compute group-level statistics.
- fit_transform – Fit and transform in one step, computing covariances only once.
- get_metadata_routing – Get metadata routing of this object.
- get_params – Get parameters for this estimator.
- inverse_transform – Reconstruct connectivity matrices from vectorized or tangent-space forms.
- set_params – Set the parameters of this estimator.
- transform – Compute connectivity matrices for new subjects.
fit ¶
Fit the covariance estimator and compute group-level statistics.
Parameters:

- X (DataArray or list[DataArray]) – Time series for each subject. Each DataArray must have a time dimension and exactly one additional dimension (the features/regions dimension). The number of timepoints may differ across subjects, but the features dimension must have the same name and size.
- y (None, default: None) – Ignored. Present for sklearn API compatibility.

Returns:

- ConnectivityMatrix – Fitted estimator.

Raises:

- TypeError – If X is not a DataArray or list of DataArrays.
- ValueError – If any subject is missing the time dimension, has an incorrect number of dimensions, has inconsistent feature sizes, or if kind is not one of the allowed values.
Notes
Dask-backed DataArrays are computed in memory during fit when covariance
matrices are estimated. This class is inherently eager: covariance estimation
requires the full time series.
fit_transform ¶
Fit and transform in one step, computing covariances only once.
Parameters:

- X (DataArray or list[DataArray]) – Time series for each subject. Each DataArray must have a time dimension and exactly one additional features dimension. The number of timepoints may differ across subjects, but the features dimension must be consistent.
- y (None, default: None) – Ignored. Present for sklearn API compatibility.

Returns:

- (n_subjects, n_features, n_features) numpy.ndarray or (n_subjects, n_features * (n_features + 1) / 2) numpy.ndarray – Connectivity matrices, or their vectorized lower triangular parts when vectorize is True.

Raises:

- TypeError – If X is not a DataArray or list of DataArrays.
- ValueError – If subjects have inconsistent features dimensions, if kind is not valid, or if kind="tangent" is used with a single subject (tangent space returns deviations from a group mean, which is trivially zero for a single subject).
get_metadata_routing ¶
Get metadata routing of this object.
Please check the scikit-learn User Guide on metadata routing for how the routing mechanism works.

Returns:

- routing (MetadataRequest) – A sklearn.utils.metadata_routing.MetadataRequest encapsulating routing information.
inverse_transform ¶
inverse_transform(
connectivities: NDArray, diagonal: NDArray | None = None
) -> NDArray
Reconstruct connectivity matrices from vectorized or tangent-space forms.
Parameters:

- connectivities ((n_subjects, n_features, n_features) numpy.ndarray, or (n_subjects, n_features * (n_features + 1) / 2) numpy.ndarray, or (n_subjects, (n_features - 1) * n_features / 2) numpy.ndarray) – Connectivity matrices or their vectorized forms. When kind="tangent", these are tangent space displacements that are mapped back to covariance matrices.
- diagonal ((n_subjects, n_features) numpy.ndarray, default: None) – Diagonal values to restore when discard_diagonal was True. Required for "covariance" and "precision" kinds when the diagonal was discarded; for "correlation" and "partial correlation", a diagonal of ones is assumed automatically.

Returns:

- (n_subjects, n_features, n_features) numpy.ndarray – Reconstructed connectivity matrices. For "tangent" kind, these are the original covariance matrices.

Raises:

- NotFittedError – If the estimator has not been fitted yet.
- ValueError – If the diagonal was discarded for an ambiguous kind ("covariance" or "precision") and no diagonal is provided.
set_params ¶
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as sklearn.pipeline.Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters:

- **params (dict) – Estimator parameters.

Returns:

- self (estimator instance) – Estimator instance.
transform ¶
Compute connectivity matrices for new subjects.
Parameters:

- X (DataArray or list[DataArray]) – Time series for each subject. The features dimension name and size must match the values seen during fit.

Returns:

- (n_subjects, n_features, n_features) numpy.ndarray or (n_subjects, n_features * (n_features + 1) / 2) numpy.ndarray – Connectivity matrices, or their vectorized lower triangular parts when vectorize is True.

Raises:

- NotFittedError – If the estimator has not been fitted yet.
- ValueError – If any subject has a features dimension that does not match features_dim_in_ or n_features_in_.
SeedBasedMaps ¶
Bases: BaseEstimator
Seed-based functional connectivity maps from fUSI data.
Computes voxel-wise Pearson correlation maps between one or more seed region signals and every voxel in a fUSI DataArray.
Two ways to supply the seed signal are supported:

- Mask-based (seed_masks): integer label maps are passed and the seed signals are extracted from the (optionally cleaned) data via extract_with_labels. Signal cleaning via clean is applied to the full data array before seed extraction so that both the seed signal and the per-voxel signals are preprocessed consistently.
- Signal-based (seed_signals): pre-computed (time, region) seed signals are provided directly. In this case extraction is skipped entirely and the supplied signals are correlated against the (optionally cleaned) data. This is useful when the seed signal has been computed externally or originates from a different modality.

Exactly one of seed_masks or seed_signals must be provided.
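Under the hood, a seed map is just a voxel-wise Pearson correlation against the seed time course. A minimal NumPy equivalent (single seed, no cleaning), shown for intuition only:

```python
import numpy as np

def seed_map(data, seed_signal):
    """Voxel-wise Pearson r between a (time,) seed signal and (time, ...) data."""
    d = data - data.mean(axis=0)            # center each voxel's time course
    s = seed_signal - seed_signal.mean()    # center the seed time course
    num = np.tensordot(s, d, axes=(0, 0))   # covariance numerators, spatial shape
    denom = np.sqrt((s ** 2).sum()) * np.sqrt((d ** 2).sum(axis=0))
    return num / denom
```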
Parameters:

- seed_masks (DataArray, default: None) – Integer label maps defining the seed region(s). Two formats are accepted (same as extract_with_labels):
  - Flat label map: spatial dims only, e.g. (z, y, x). Background voxels are 0; each unique non-zero integer is a separate seed region.
  - Stacked mask format: leading mask dim followed by spatial dims, e.g. (mask, z, y, x). Each layer has values in {0, region_id} and regions may overlap.

  A boolean mask can be used by converting it first: mask.astype(int). Mutually exclusive with seed_signals.
- seed_signals (DataArray, default: None) – Pre-computed seed signals with a time dimension and an optional region dimension. When provided, seed extraction from the data is skipped and these signals are used directly to compute Pearson correlations. clean_kwargs is still applied to the data array before computing correlations, but the seed signals themselves are used as-is. Mutually exclusive with seed_masks.
- labels_reduction ({'mean', 'sum', 'median', 'min', 'max', 'var', 'std'}, default: "mean") – Aggregation function applied across voxels within each seed region when extracting seed signals from seed_masks. Ignored when seed_signals is provided.
- clean_kwargs (dict, default: None) – Keyword arguments forwarded to clean. Cleaning is applied to the full data array before computing correlations. If not provided, no cleaning is applied.

  Chunking along time: any operation in clean_kwargs that involves detrending or filtering requires the time dimension to be un-chunked. Rechunk your data before calling fit: data.chunk({'time': -1}).
Attributes:

- seed_signals_ ((time, region) xarray.DataArray) – Extracted (and cleaned) seed region signals when seed_masks is used, or the supplied signals (possibly transposed to (time, region) order) when seed_signals is used. Set after fit.
- maps_ ((region, ...) xarray.DataArray) – Voxel-wise Pearson r maps, one per seed region, set after fit. region is the leading dimension; the remaining dimensions match the spatial dimensions of the data passed to fit. If a single region is present, the region dimension is squeezed out. attrs["cmap"] is set to "coolwarm", attrs["norm"] to Normalize(vmin=-1, vmax=1), and attrs["long_name"] to "Pearson r" so that plotting functions pick up sensible defaults automatically.
Examples:
Mask-based usage: two seed regions from a flat integer label map.
>>> import numpy as np
>>> import xarray as xr
>>> from confusius.connectivity import SeedBasedMaps
>>>
>>> rng = np.random.default_rng(0)
>>> data = xr.DataArray(
... rng.standard_normal((200, 10, 20)),
... dims=["time", "y", "x"],
... coords={"time": np.arange(200) * 0.1},
... )
>>>
>>> labels = xr.DataArray(
... np.zeros((10, 20), dtype=int),
... dims=["y", "x"],
... )
>>> labels[:3, :] = 1 # Region 1: first 3 y-slices.
>>> labels[3:6, :] = 2 # Region 2: next 3 y-slices.
>>>
>>> mapper = SeedBasedMaps(seed_masks=labels)
>>> mapper.fit(data)
SeedBasedMaps(seed_masks=...)
>>> mapper.maps_.dims
('region', 'y', 'x')
>>> mapper.maps_.coords["region"].values
array([1, 2])
>>>
>>> # Single seed from a boolean mask converted to integer.
>>> mask = xr.DataArray(
... np.zeros((10, 20), dtype=bool),
... dims=["y", "x"],
... )
>>> mask[:3, :] = True
>>> mapper_single = SeedBasedMaps(seed_masks=mask.astype(int))
>>> mapper_single.fit(data)
SeedBasedMaps(seed_masks=...)
>>> mapper_single.maps_.dims # region dim is squeezed for a single seed
('y', 'x')
Signal-based usage: provide seed signals directly.
>>> seed_signal = xr.DataArray(
... rng.standard_normal(200),
... dims=["time"],
... coords={"time": np.arange(200) * 0.1},
... )
>>> mapper_sig = SeedBasedMaps(seed_signals=seed_signal)
>>> mapper_sig.fit(data)
SeedBasedMaps(seed_signals=...)
>>> mapper_sig.maps_.dims # single signal, region dim squeezed
('y', 'x')
Methods:

- fit – Compute the seed-based correlation maps.
- get_metadata_routing – Get metadata routing of this object.
- get_params – Get parameters for this estimator.
- set_params – Set the parameters of this estimator.
fit ¶
Compute the seed-based correlation maps.
Parameters:

- X ((time, ...) xarray.DataArray) – A fUSI DataArray to estimate seed-based maps from. Must have a time dimension. The spatial dimensions must be compatible with seed_masks when using mask-based seeding.

  Chunking along time: the time dimension must NOT be chunked when clean_kwargs includes detrending or filtering steps. Rechunk first: X.chunk({'time': -1}).
- y (None, default: None) – Ignored. Present for sklearn API compatibility.

Returns:

- SeedBasedMaps – Fitted estimator.

Raises:

- ValueError – If neither or both of seed_masks and seed_signals are provided, if X has no time dimension or fewer than 2 timepoints, or if the time dimension is chunked when required. When seed_signals is provided, also raised if it has unexpected dimensions, a time size that differs from X, or time coordinates that do not match X.
- TypeError – If seed_masks is not an integer-dtype DataArray.
get_metadata_routing ¶
Get metadata routing of this object.
Please check the scikit-learn User Guide on metadata routing for how the routing mechanism works.

Returns:

- routing (MetadataRequest) – A sklearn.utils.metadata_routing.MetadataRequest encapsulating routing information.
set_params ¶
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as sklearn.pipeline.Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters:

- **params (dict) – Estimator parameters.

Returns:

- self (estimator instance) – Estimator instance.
covariance_to_correlation ¶
covariance_to_correlation(covariance: NDArray) -> NDArray
Return the correlation matrix for a given covariance matrix.
Parameters:

- covariance ((n_features, n_features) numpy.ndarray) – Input covariance matrix.

Returns:

- ((n_features, n_features) numpy.ndarray) – Correlation matrix. Diagonal elements are exactly 1.
Notes
Adapted from
nilearn.connectome.connectivity_matrices
(BSD-3-Clause License; see NOTICE for attribution).
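A minimal NumPy equivalent of this conversion, shown for intuition (the package's implementation may differ in details): divide the covariance by the outer product of the standard deviations, then force exact ones on the diagonal.

```python
import numpy as np

def cov_to_corr(cov):
    """Correlation matrix from a covariance matrix."""
    d = np.sqrt(np.diag(cov))            # per-feature standard deviations
    corr = cov / np.outer(d, d)          # normalize each entry by sigma_i * sigma_j
    np.fill_diagonal(corr, 1.0)          # exact ones on the diagonal
    return corr
```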
precision_to_partial_correlation ¶
Return the partial correlation matrix for a given precision matrix.
Parameters:

- precision ((n_features, n_features) numpy.ndarray) – Input precision matrix (inverse of the covariance matrix).

Returns:

- ((n_features, n_features) numpy.ndarray) – Partial correlation matrix. Diagonal elements are exactly 1.
Notes
Adapted from
nilearn.connectome.connectivity_matrices
(BSD-3-Clause License; see NOTICE for attribution).
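The underlying identity is partial_corr[i, j] = -P[i, j] / sqrt(P[i, i] * P[j, j]), with ones on the diagonal by convention. A minimal sketch, not the package's implementation:

```python
import numpy as np

def precision_to_partial(precision):
    """Partial correlations from a precision matrix."""
    d = np.sqrt(np.diag(precision))
    partial = -precision / np.outer(d, d)   # negate and rescale off-diagonals
    np.fill_diagonal(partial, 1.0)          # convention: exact ones on the diagonal
    return partial
```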
symmetric_matrix_to_vector ¶
symmetric_matrix_to_vector(
symmetric: NDArray, discard_diagonal: bool = False
) -> NDArray
Return the flattened lower triangular part of a symmetric matrix.
Diagonal elements are divided by sqrt(2) when discard_diagonal is False
so that the Frobenius norm is preserved under the vectorization.
Acts on the last two dimensions if the input is not 2D.
Parameters:

- symmetric ((..., n_features, n_features) numpy.ndarray) – Input symmetric matrix or batch of symmetric matrices.
- discard_diagonal (bool, default: False) – Whether diagonal elements should be omitted from the output.

Returns:

- ndarray – Flattened lower triangular part. Shape is (..., n_features * (n_features + 1) / 2) when discard_diagonal is False and (..., (n_features - 1) * n_features / 2) otherwise.
Notes
Adapted from
nilearn.connectome.connectivity_matrices
(BSD-3-Clause License; see NOTICE for attribution).
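The sqrt(2) bookkeeping can be made concrete: off-diagonal entries appear twice in the matrix but only once in the vector, so scaling them by sqrt(2) while leaving the diagonal untouched preserves the Frobenius norm. A minimal 2D sketch of the same idea:

```python
import numpy as np

def sym_to_vec(m):
    """Flatten the lower triangle so the vector's L2 norm equals ||m||_F."""
    scaled = np.asarray(m, dtype=float).copy()
    # Divide the diagonal by sqrt(2), then scale everything by sqrt(2):
    # net effect is diagonal kept as-is, off-diagonals scaled by sqrt(2).
    scaled[np.diag_indices_from(scaled)] /= np.sqrt(2)
    i, j = np.tril_indices_from(scaled)
    return np.sqrt(2) * scaled[i, j]
```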
vector_to_symmetric_matrix ¶
Return the symmetric matrix given its flattened lower triangular part.
This is the inverse of
symmetric_matrix_to_vector.
Diagonal elements are multiplied by sqrt(2) to invert the norm-preserving scaling
applied during vectorization. Acts on the last dimension of the input if it is not
1D.
Parameters:

- vec ((..., n * (n + 1) / 2) numpy.ndarray or (..., (n - 1) * n / 2) numpy.ndarray) – Vectorized lower triangular part. The diagonal may be included in vec or supplied separately via diagonal.
- diagonal ((..., n) numpy.ndarray, default: None) – Diagonal values to insert. When provided, vec is assumed to contain only the off-diagonal elements and diagonal supplies the main diagonal.

Returns:

- ((..., n, n) numpy.ndarray) – Reconstructed symmetric matrix.

Raises:

- ValueError – If vec has a length that does not correspond to a valid triangular number, or if diagonal has an incompatible shape.
Notes
Adapted from
nilearn.connectome.connectivity_matrices
(BSD-3-Clause License; see NOTICE for attribution).