Group analysis
One of the main purposes of whole-brain image analysis is to statistically compare different cohorts and find the brain regions that show a significant difference in activation. Coincidentally, this is exactly the original purpose for which braian
was developed!
But first of all, we need to read raw data exported using the previous tutorial:
import braian
import braian.config
import braian.plot as bap
import braian.stats as bas
import plotly.io as pio
from pathlib import Path
# This ensures BraiAn's figures work in multiple places:
pio.renderers.default = "plotly_mimetype+notebook"
root_dir = Path.cwd().absolute().parent # braian experiment root
config_file = root_dir/"config_example.yml" # configuration path
config = braian.config.BraiAnConfig(config_file, "/tmp") # we instantiate the config
ontology = config.read_atlas_ontology()
experiment = config.experiment_from_csv(fill_nan=False)
Normalizations
However, raw cell counts are not an appropriate metric for group comparisons: bigger regions will most likely show larger differences between cohorts, while fluctuations in activity in smaller regions might slip by.
braian.stats
offers multiple normalization metrics for this purpose. One of them is cell density:
$$
\frac {c_r} {area_r}
$$
where $c_r$ is the number of cell detections in region $r$ and $area_r$ is that region's area.
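For intuition, the density metric can be sketched in plain Python. The region names and numbers below are purely illustrative; in practice, bas.density computes this for you from a brain's region data:

```python
# Hypothetical cell counts (c_r) and region areas for a few regions
counts = {"ACA": 120, "MOp": 340, "CA1": 95}
areas = {"ACA": 1.2, "MOp": 4.0, "CA1": 0.8}  # e.g. in mm^2

# density_r = c_r / area_r
density = {r: counts[r] / areas[r] for r in counts}
print(density)
```

Normalizing by region size lets small but highly active regions stand out against large ones.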
Let's create animal groups of marker densities:
group_hc, group_ctx, group_fc = (
    braian.AnimalGroup(
        cohort.name,
        [bas.density(b.merge_hemispheres()) for b in cohort.animals],
        hemisphere_distinction=True,
        brain_ontology=ontology,
        fill_nan=True
    )
    for cohort in (experiment.hc, experiment.ctx, experiment.fc)
)
We can view data at different granularity levels of the brain ontology!
for granularity, height in zip(("major divisions", "summary structures", "leaves"), (500, 3000, 5000)):
    print(granularity)
    granularity_regions = ontology.get_regions(granularity)
    bap.xmas_tree(
        (group_hc, group_ctx, group_fc),
        granularity_regions,
        marker1="cFos", marker2="Arc",
        groups_marker1_colours=["LightCoral", "SandyBrown", "green"],
        groups_marker2_colours=["IndianRed", "Orange", "lightgreen"],
        height=height
    ).show()
major divisions
summary structures
leaves
Task mean-centred PLS analysis
Often, in behavioural neuroscience, you want to analyse the relationship between brain activity and experimental design, with the aim of finding statistical differences and, if any, exposing the brain regions that contribute most to them.
braian.stats.PLS
implements task partial least squares correlation (Task PLSC), as described in Krishnan et al. (2011), and helps you with the group analysis.
Here, we compare the brain activity recorded through cFos across all three groups (home cage, novel context and fear conditioned mice), considering the 295 brain regions included in the summary structures.
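Before running it on real data, the core of task mean-centred PLSC can be sketched with NumPy. This is a toy illustration on random data, not braian's actual implementation: the group-by-region matrix of mean activities is centred on the grand mean, then decomposed with an SVD whose singular vectors carry the group and region saliences:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 3 groups of 5 animals each, 10 brain regions
groups = [rng.normal(loc=mu, size=(5, 10)) for mu in (0.0, 0.5, 1.0)]

# Task mean-centred PLSC: per-group mean profiles, centred on the grand mean
group_means = np.vstack([g.mean(axis=0) for g in groups])  # shape (3, 10)
grand_mean = group_means.mean(axis=0)
M = group_means - grand_mean                               # mean-centred matrix

U, s, Vt = np.linalg.svd(M, full_matrices=False)
# U columns: group saliences; Vt rows: region saliences
# s**2 / sum(s**2): variance explained by each latent component
explained = s**2 / np.sum(s**2)
print(explained)
```

Because the matrix is mean-centred across the three groups, it has at most two informative components; the last one explains essentially no variance.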
pls = bas.PLS(
ontology.get_regions("summary structures"),
group_hc, group_ctx, group_fc,
marker="cFos"
)
To generalize the results of PLS, we assess the significance of its components by permutation testing:
pls.random_permutation(10_000, seed=42)
print("p-values:")
for i,p in enumerate(pls.test_null_hypothesis()):
print(f"\tcomponent {i+1}: {p}")
p-values:
	component 1: 0.0041
	component 2: 0.5484
	component 3: 0.0163
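The p-values above follow standard permutation logic: each observed singular value is compared against a null distribution built from statistics recomputed under shuffled group labels. A minimal sketch of that final step, using a hypothetical null distribution in place of the real permuted statistics:

```python
import numpy as np

rng = np.random.default_rng(42)
observed = 3.2                  # hypothetical observed statistic
null = rng.normal(size=10_000)  # stand-in for 10,000 permuted statistics

# p-value: fraction of permutations at least as extreme as the observed value
p = float(np.mean(null >= observed))
print(p)
```

With a truly significant component, only a tiny fraction of permutations reaches the observed value, so p stays well below the usual 0.05 threshold.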
bap.permutation(pls, component=1).update_layout(width=650)
bap.groups_salience(pls, component=1).update_layout(width=400)
bap.latent_variable(pls, of="X", width=650, height=650)
pls.bootstrap_salience_scores(10_000, seed=42)
ths = bas.PLS.to_zscore(0.05)
bap.region_scores(pls.above_threshold(ths), ontology, width=None, thresholds=ths)
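As a side note, bas.PLS.to_zscore(0.05) presumably converts the significance level into the corresponding two-tailed normal quantile used to threshold the bootstrap salience scores; a standard-library equivalent of that conversion would be:

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed critical z-score for the given alpha
# (assumed to match what bas.PLS.to_zscore computes)
z = NormalDist().inv_cdf(1 - alpha / 2)
print(round(z, 2))  # 1.96
```

Regions whose bootstrap salience score exceeds this z-score are considered reliable contributors to the component.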