# CAMB Documentation
---
## https://camb.readthedocs.io/en/latest/index.html
# CAMB
CAMB (Code for Anisotropies in the Microwave Background), a cosmology code for calculating CMB, lensing,
galaxy count, dark-age 21cm power spectra, matter power spectra and transfer functions.
There are also general utility functions for cosmological calculations such as the background expansion, distances, etc.
The main code is Python with numerical calculations implemented efficiently in Python-wrapped modern Fortran.
See the [CAMB python example notebook](https://camb.readthedocs.io/en/latest/CAMBdemo.html) for an
introductory set of examples of how to use the CAMB package. This is usually the fastest way to learn how to use it
and quickly see some of the capabilities. There’s also an [AI help assistant](https://cosmocoffee.info/help_assist.php),
with up-to-date knowledge of the full Python documentation,
which you can use with code execution in [CMB Agent](https://github.com/CMBAgents/cmbagent/).
There are also [technical notes](https://cosmologist.info/notes/CAMB.pdf),
the [symbolic equation notebook](https://camb.readthedocs.io/en/latest/ScalEqs.html),
and an [LLM context file](https://camb.readthedocs.io/en/latest/_static/camb_docs_combined.md) that you can use in
system prompts or as part of a documentation database.
For a standard non-editable installation use:
```default
pip install camb [--user]
```
The `--user` flag is optional and only required if you don't have write permission to your main Python installation.
If you want to work on the code from [GitHub](https://github.com/cmbant/camb), you can install using:
```default
git clone --recursive https://github.com/cmbant/CAMB.git
pip install -e ./CAMB [--user]
```
You will need ifort or gfortran 6 or higher installed (and on your path) to compile from source;
you can see [Fortran compilers](fortran_compilers.md#fortran-compilers) for compiler installation details if needed.
If you have gfortran installed, “python setup.py make” will build
the Fortran library on all systems (including Windows without directly using a Makefile), and can be used to update
a source installation after changes or pulling an updated version.
The standard pip installation includes binary pre-compiled code, so no need for a Fortran compiler
(unless you want to use custom sources/symbolic compilation features).
Anaconda users can also install from conda-forge, best making a new clean environment using:
```default
conda create -n camb -c conda-forge python=3.13 camb
activate camb
```
Check that conda installs the latest version; if not, try installing in a new clean conda environment as above.
After installation the camb python module can be loaded from your scripts using “import camb”.
You can also run CAMB from the command line reading parameters from a .ini file, e.g.:
```default
camb inifiles/planck_2018.ini
```
Sample .ini files can be obtained from the [repository](https://github.com/cmbant/CAMB/tree/master/inifiles). You can load parameters programmatically from an .ini file or URL using [`camb.read_ini()`](camb.md#camb.read_ini).
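For a quick start from Python, a minimal script could look like the following (the parameter values are illustrative, not recommendations):
```python
import camb

# Set a flat LCDM model (values here are illustrative)
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, ns=0.965, As=2e-9, lmax=2500)
results = camb.get_results(pars)
powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
totCL = powers['total']  # D_ell = l(l+1)C_l/2pi columns for TT, EE, BB, TE
```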
Main high-level modules:
* [Basic functions](camb.md)
* [`get_age()`](camb.md#camb.get_age)
* [`get_background()`](camb.md#camb.get_background)
* [`get_matter_power_interpolator()`](camb.md#camb.get_matter_power_interpolator)
* [`get_results()`](camb.md#camb.get_results)
* [`get_transfer_functions()`](camb.md#camb.get_transfer_functions)
* [`get_valid_numerical_params()`](camb.md#camb.get_valid_numerical_params)
* [`get_zre_from_tau()`](camb.md#camb.get_zre_from_tau)
* [`read_ini()`](camb.md#camb.read_ini)
* [`run_ini()`](camb.md#camb.run_ini)
* [`set_feedback_level()`](camb.md#camb.set_feedback_level)
* [`set_params()`](camb.md#camb.set_params)
* [`set_params_cosmomc()`](camb.md#camb.set_params_cosmomc)
* [Input parameter model](model.md)
* [`CAMBparams`](model.md#camb.model.CAMBparams)
* [`AccuracyParams`](model.md#camb.model.AccuracyParams)
* [`TransferParams`](model.md#camb.model.TransferParams)
* [`SourceTermParams`](model.md#camb.model.SourceTermParams)
* [`CustomSources`](model.md#camb.model.CustomSources)
* [Calculation results](results.md)
* [`CAMBdata`](results.md#camb.results.CAMBdata)
* [`MatterTransferData`](results.md#camb.results.MatterTransferData)
* [`ClTransferData`](results.md#camb.results.ClTransferData)
* [Symbolic manipulation](symbolic.md)
* [`LinearPerturbation()`](symbolic.md#camb.symbolic.LinearPerturbation)
* [`camb_fortran()`](symbolic.md#camb.symbolic.camb_fortran)
* [`cdm_gauge()`](symbolic.md#camb.symbolic.cdm_gauge)
* [`compile_source_function_code()`](symbolic.md#camb.symbolic.compile_source_function_code)
* [`f_K`](symbolic.md#camb.symbolic.f_K)
* [`get_hierarchies()`](symbolic.md#camb.symbolic.get_hierarchies)
* [`get_scalar_temperature_sources()`](symbolic.md#camb.symbolic.get_scalar_temperature_sources)
* [`make_frame_invariant()`](symbolic.md#camb.symbolic.make_frame_invariant)
* [`newtonian_gauge()`](symbolic.md#camb.symbolic.newtonian_gauge)
* [`synchronous_gauge()`](symbolic.md#camb.symbolic.synchronous_gauge)
Other modules:
* [BBN models](bbn.md)
* [Dark Energy models](dark_energy.md)
* [Initial power spectra](initialpower.md)
* [Non-linear models](nonlinear.md)
* [Reionization models](reionization.md)
* [Recombination models](recombination.md)
* [Source windows functions](sources.md)
* [Correlation functions](correlations.md)
* [Post-Born lensing](postborn.md)
* [Lensing emission angle](emission_angle.md)
* [Matter power spectrum and matter transfer function variables](transfer_variables.md)
* [Modifying the code](modifying_code.md)
* [CAMB Variables and Gauge Conventions](variables_guide.md)
* [Fortran compilers](fortran_compilers.md)
* [Maths utils](mathutils.md)
* [Example notebook](https://camb.readthedocs.io/en/latest/CAMBdemo.html)
* [Index](genindex.md)
---
## https://camb.readthedocs.io/en/latest/camb.html
# Basic functions
CAMB, Code for Anisotropies in the Microwave Background ([https://camb.info](https://camb.info))
Computational modules are wrapped Fortran 2003, but can be used entirely from Python.
### camb.get_age(params)
Get age of universe for given set of parameters
* **Parameters:**
**params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **Returns:**
age of universe in Julian gigayears
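For example (illustrative parameter values):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
print(camb.get_age(pars))  # roughly 13.8 Gyr for parameters close to Planck values
```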
### camb.get_background(params, no_thermo=False)
Calculate background cosmology for specified parameters and return [`CAMBdata`](results.md#camb.results.CAMBdata), ready to get derived
parameters and use background functions like [`angular_diameter_distance()`](results.md#camb.results.CAMBdata.angular_diameter_distance).
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **no_thermo** – set True if thermal and ionization history not required.
* **Returns:**
[`CAMBdata`](results.md#camb.results.CAMBdata) instance
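A minimal usage sketch (illustrative parameter values):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
background = camb.get_background(pars)
# Background quantities are now available without computing perturbations
print(background.angular_diameter_distance(1100.0))  # in Mpc
print(background.hubble_parameter(0.5))              # H(z=0.5) in km/s/Mpc
```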
### camb.get_matter_power_interpolator(params, zmin=0.0, zmax=10.0, nz_step=100, zs=None, kmax=10.0, nonlinear=True, var1=None, var2=None, hubble_units=True, k_hunit=True, return_z_k=False, k_per_logint=None, log_interp=True, extrap_kmax=None)
Return a 2D spline interpolation object to evaluate matter power spectrum as function of z and k/h, e.g.
```python
from camb import get_matter_power_interpolator
PK = get_matter_power_interpolator(params)
print("Power spectrum at z=0.5, k/h=0.1/Mpc is %s (Mpc/h)^3 " % (PK.P(0.5, 0.1)))
```
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
This function re-calculates results from scratch with the given parameters.
If you already have a [`CAMBdata`](results.md#camb.results.CAMBdata) result object, you should instead
use [`get_matter_power_interpolator()`](results.md#camb.results.CAMBdata.get_matter_power_interpolator)
(call [`model.CAMBparams.set_matter_power()`](model.md#camb.model.CAMBparams.set_matter_power) as needed to set up the required ranges for the matter power
before calling get_results).
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **zmin** – minimum z (use 0 or smaller than you want for good interpolation)
* **zmax** – maximum z (use larger than you want for good interpolation)
* **nz_step** – number of steps to sample in z (default max allowed is 100)
* **zs** – instead of zmin, zmax, nz_step, can specify an explicit array of z values to spline from
* **kmax** – maximum k
* **nonlinear** – include non-linear correction from halo model
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **hubble_units** – if true, output power spectrum in $({\rm Mpc}/h)^{3}$ units,
otherwise ${\rm Mpc}^{3}$
* **k_hunit** – if true, matter power is a function of k/h, if false, just k (both ${\rm Mpc}^{-1}$ units)
* **return_z_k** – if true, return interpolator, z, k where z, k are the grid used
* **k_per_logint** – specific uniform sampling over log k (if not set, uses optimized irregular sampling)
* **log_interp** – if true, interpolate log of power spectrum (unless any values are negative in which case ignored)
* **extrap_kmax** – if set, use power law extrapolation beyond kmax to extrap_kmax (useful for tails of integrals)
* **Returns:**
An object PK based on [`RectBivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline), that can be called
with PK.P(z,kh) or PK(z,log(kh)) to get log matter power values.
If return_z_k=True, instead return interpolator, z, k where z, k are the grid used.
### camb.get_results(params)
Calculate results for specified parameters and return [`CAMBdata`](results.md#camb.results.CAMBdata) instance for getting results.
* **Parameters:**
**params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **Returns:**
[`CAMBdata`](results.md#camb.results.CAMBdata) instance
### camb.get_transfer_functions(params, only_time_sources=False)
Calculate transfer functions for specified parameters and return [`CAMBdata`](results.md#camb.results.CAMBdata) instance for
getting results and subsequently calculating power spectra.
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **only_time_sources** – does not calculate the CMB l,k transfer functions and does not apply any non-linear
correction scaling. Results with only_time_sources=True can therefore be used with
different initial power spectra to get consistent non-linear lensed spectra.
* **Returns:**
[`CAMBdata`](results.md#camb.results.CAMBdata) instance
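As a sketch, the stored transfer functions can be reused with different primordial spectra via the `power_spectra_from_transfer` method of [`CAMBdata`](results.md#camb.results.CAMBdata) (parameter values are illustrative):
```python
import camb
from camb import initialpower

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.96, lmax=2000)
data = camb.get_transfer_functions(pars)

# Reuse the stored transfer functions with a different initial power spectrum
inflation_params = initialpower.InitialPowerLaw()
inflation_params.set_params(As=2.1e-9, ns=0.97)
data.power_spectra_from_transfer(inflation_params)
cl = data.get_total_cls(2000)
```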
### camb.get_valid_numerical_params(transfer_only=False, \*\*class_names)
Get numerical parameter names that are valid input to [`set_params()`](#camb.set_params)
* **Parameters:**
* **transfer_only** – if True, exclude parameters that affect only initial power spectrum or non-linear model
* **class_names** – class name parameters that will be used by [`model.CAMBparams.set_classes()`](model.md#camb.model.CAMBparams.set_classes)
* **Returns:**
set of valid input parameter names for [`set_params()`](#camb.set_params)
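For example:
```python
import camb

names = camb.get_valid_numerical_params()
print('H0' in names, 'ombh2' in names)  # parameter names accepted by camb.set_params()
```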
### camb.get_zre_from_tau(params, tau)
Get reionization redshift given optical depth tau
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
* **tau** – optical depth
* **Returns:**
reionization redshift (or negative number if error)
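For example (illustrative values):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
print(camb.get_zre_from_tau(pars, 0.054))  # reionization redshift corresponding to tau = 0.054
```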
### camb.read_ini(ini_filename, no_validate=False)
Get a [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance using parameter specified in a .ini parameter file.
* **Parameters:**
* **ini_filename** – path of the .ini file to read, or a full URL to download from
* **no_validate** – do not pre-validate the ini file (faster, but may crash kernel if error)
* **Returns:**
[`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
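For example, using one of the sample parameter files (the path shown assumes you are running from the CAMB source directory):
```python
import camb

pars = camb.read_ini('inifiles/planck_2018.ini')  # or a full URL to a .ini file
results = camb.get_results(pars)
```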
### camb.run_ini(ini_filename, no_validate=False)
Run the command line camb from a .ini file (producing text files as with the command line program).
This does the same as the command line program, except global config parameters are not read and set (which does not
change results in almost all cases).
* **Parameters:**
* **ini_filename** – .ini file to use
* **no_validate** – do not pre-validate the ini file (faster, but may crash kernel if error)
### camb.set_feedback_level(level=1)
Set the feedback level for internal CAMB calls
* **Parameters:**
**level** – zero for nothing, >1 for more
### camb.set_params(cp=None, verbose=False, \*\*params)
Set all CAMB parameters at once, including parameters which are part of the
CAMBparams structure, as well as global parameters.
E.g.:
```default
cp = camb.set_params(
ns=1,
H0=67,
ombh2=0.022,
omch2=0.1,
w=-0.95,
Alens=1.2,
lmax=2000,
WantTransfer=True,
dark_energy_model="DarkEnergyPPF",
)
```
This is equivalent to:
```default
cp = model.CAMBparams()
cp.DarkEnergy = DarkEnergyPPF()
cp.DarkEnergy.set_params(w=-0.95)
cp.set_cosmology(H0=67, omch2=0.1, ombh2=0.022, Alens=1.2)
cp.set_for_lmax(lmax=2000)
cp.InitPower.set_params(ns=1)
cp.WantTransfer = True
```
The wrapped functions are (in this order):
* [`model.CAMBparams.set_accuracy()`](model.md#camb.model.CAMBparams.set_accuracy)
* [`model.CAMBparams.set_classes()`](model.md#camb.model.CAMBparams.set_classes)
* [`dark_energy.DarkEnergyEqnOfState.set_params()`](dark_energy.md#camb.dark_energy.DarkEnergyEqnOfState.set_params) (or equivalent if a different dark energy model class used)
* [`reionization.TanhReionization.set_extra_params()`](reionization.md#camb.reionization.TanhReionization.set_extra_params) (or equivalent if a different reionization class used)
* [`model.CAMBparams.set_cosmology()`](model.md#camb.model.CAMBparams.set_cosmology)
* [`model.CAMBparams.set_matter_power()`](model.md#camb.model.CAMBparams.set_matter_power)
* [`model.CAMBparams.set_for_lmax()`](model.md#camb.model.CAMBparams.set_for_lmax)
* [`initialpower.InitialPowerLaw.set_params()`](initialpower.md#camb.initialpower.InitialPowerLaw.set_params) (or equivalent if a different initial power model class used)
* [`nonlinear.Halofit.set_params()`](nonlinear.md#camb.nonlinear.Halofit.set_params)
* **Parameters:**
* **params** – the values of the parameters
* **cp** – use this CAMBparams instead of creating a new one
* **verbose** – print out the equivalent set of commands
* **Returns:**
[`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
### camb.set_params_cosmomc(p, num_massive_neutrinos=1, neutrino_hierarchy='degenerate', halofit_version='mead', dark_energy_model='ppf', lmax=2500, lens_potential_accuracy=1, inpars=None)
get CAMBParams for dictionary of cosmomc-named parameters assuming Planck 2018 defaults
* **Parameters:**
* **p** – dictionary of cosmomc parameters (e.g. from getdist.types.BestFit’s getParamDict() function)
* **num_massive_neutrinos** – usually 1 if fixed mnu=0.06 eV, three if mnu varying
* **neutrino_hierarchy** – hierarchy
* **halofit_version** – name of the specific Halofit model to use for non-linear modelling
* **dark_energy_model** – ppf or fluid dark energy model
* **lmax** – lmax for accuracy settings
* **lens_potential_accuracy** – lensing accuracy parameter
* **inpars** – optional input CAMBParams to set
* **Returns:**
[`model.CAMBparams`](model.md#camb.model.CAMBparams) instance
## https://camb.readthedocs.io/en/latest/model.html
# Input parameter model
### *class* camb.model.CAMBparams(\*args, \*\*kwargs)
Object storing the parameters for a CAMB calculation, including cosmological parameters and
settings for what to calculate. When a new object is instantiated, default parameters are set automatically.
To add a new parameter, add it to the CAMBparams type in model.f90, then edit the \_fields_ list in the CAMBparams
class in model.py to add the new parameter in the corresponding location of the member list. After rebuilding the
python version you can then access the parameter by using params.new_parameter_name where params is a CAMBparams
instance. You could also modify the wrapper functions to set the field value less directly.
You can view the set of underlying parameters used by the Fortran code by printing the CAMBparams instance.
In python, to set cosmology parameters it is usually best to use [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology) and
equivalent methods for most other parameters. Alternatively the convenience function [`camb.set_params()`](camb.md#camb.set_params)
can construct a complete instance from a dictionary of relevant parameters.
You can also save and restore a CAMBparams instance using the repr and eval functions, or pickle it.
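For example, a round trip through pickle (a sketch; parameter values are illustrative):
```python
import pickle
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
pars_restored = pickle.loads(pickle.dumps(pars))  # serialize and restore
pars.diff(pars_restored)  # prints any differences (there should be none)
```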
* **Variables:**
* **WantCls** – (*boolean*) Calculate C_L
* **WantTransfer** – (*boolean*) Calculate matter transfer functions and matter power spectrum
* **WantScalars** – (*boolean*) Calculates scalar modes
* **WantTensors** – (*boolean*) Calculate tensor modes
* **WantVectors** – (*boolean*) Calculate vector modes
* **WantDerivedParameters** – (*boolean*) Calculate derived parameters
* **Want_cl_2D_array** – (*boolean*) For the C_L, include NxN matrix of all possible cross-spectra between sources
* **Want_CMB** – (*boolean*) Calculate the temperature and polarization power spectra
* **Want_CMB_lensing** – (*boolean*) Calculate the lensing potential power spectrum
* **DoLensing** – (*boolean*) Include CMB lensing
* **NonLinear** – (integer/string, one of: NonLinear_none, NonLinear_pk, NonLinear_lens, NonLinear_both)
* **Transfer** – [`camb.model.TransferParams`](#camb.model.TransferParams)
* **want_zstar** – (*boolean*)
* **want_zdrag** – (*boolean*)
* **min_l** – (*integer*) l_min for the scalar C_L (1 or 2, L=1 dipoles are Newtonian Gauge)
* **max_l** – (*integer*) l_max for the scalar C_L
* **max_l_tensor** – (*integer*) l_max for the tensor C_L
* **max_eta_k** – (*float64*) Maximum k\*eta_0 for scalar C_L, where eta_0 is the conformal time today
* **max_eta_k_tensor** – (*float64*) Maximum k\*eta_0 for tensor C_L, where eta_0 is the conformal time today
* **ombh2** – (*float64*) Omega_baryon h^2
* **omch2** – (*float64*) Omega_cdm h^2
* **omk** – (*float64*) Omega_K
* **omnuh2** – (*float64*) Omega_massive_neutrino h^2
* **H0** – (*float64*) Hubble parameter in km/s/Mpc units
* **TCMB** – (*float64*) CMB temperature today in Kelvin
* **YHe** – (*float64*) Helium mass fraction
* **num_nu_massless** – (*float64*) Effective number of massless neutrinos
* **num_nu_massive** – (*integer*) Total physical (integer) number of massive neutrino species
* **nu_mass_eigenstates** – (*integer*) Number of non-degenerate mass eigenstates
* **share_delta_neff** – (*boolean*) Share the non-integer part of num_nu_massless between the eigenstates. This is not needed or used in the python interface.
* **nu_mass_degeneracies** – (*float64 array*) Degeneracy of each distinct eigenstate
* **nu_mass_fractions** – (*float64 array*) Mass fraction in each distinct eigenstate
* **nu_mass_numbers** – (*integer array*) Number of physical neutrinos per distinct eigenstate
* **InitPower** – [`camb.initialpower.InitialPower`](initialpower.md#camb.initialpower.InitialPower)
* **Recomb** – [`camb.recombination.RecombinationModel`](recombination.md#camb.recombination.RecombinationModel)
* **Reion** – [`camb.reionization.ReionizationModel`](reionization.md#camb.reionization.ReionizationModel)
* **DarkEnergy** – [`camb.dark_energy.DarkEnergyModel`](dark_energy.md#camb.dark_energy.DarkEnergyModel)
* **NonLinearModel** – [`camb.nonlinear.NonLinearModel`](nonlinear.md#camb.nonlinear.NonLinearModel)
* **Accuracy** – [`camb.model.AccuracyParams`](#camb.model.AccuracyParams)
* **SourceTerms** – [`camb.model.SourceTermParams`](#camb.model.SourceTermParams)
* **z_outputs** – (*float64 array*) redshifts to always calculate BAO output parameters
* **scalar_initial_condition** – (integer/string, one of: initial_vector, initial_adiabatic, initial_iso_CDM, initial_iso_baryon, initial_iso_neutrino, initial_iso_neutrino_vel)
* **InitialConditionVector** – (*float64 array*) if scalar_initial_condition is initial_vector, the vector of initial condition amplitudes
* **OutputNormalization** – (*integer*) If non-zero, multipole to normalize the C_L at
* **Alens** – (*float64*) non-physical scaling amplitude for the CMB lensing spectrum power
* **MassiveNuMethod** – (integer/string, one of: Nu_int, Nu_trunc, Nu_approx, Nu_best)
* **DoLateRadTruncation** – (*boolean*) If true, use smooth approx to radiation perturbations after decoupling on small scales, saving evolution of irrelevant oscillatory multipole equations
* **Evolve_baryon_cs** – (*boolean*) Evolve a separate equation for the baryon sound speed rather than using background approximation
* **Evolve_delta_xe** – (*boolean*) Evolve ionization fraction perturbations
* **Evolve_delta_Ts** – (*boolean*) Evolve the spin temperature perturbation (for 21cm)
* **Do21cm** – (*boolean*) 21cm is not yet implemented via the python wrapper
* **transfer_21cm_cl** – (*boolean*) Get 21cm C_L at a given fixed redshift
* **Log_lvalues** – (*boolean*) Use log spacing for sampling in L
* **use_cl_spline_template** – (*boolean*) When interpolating use a fiducial spectrum shape to define ratio to spline
* **min_l_logl_sampling** – (*integer*) Minimum L to use log sampling for L
* **SourceWindows** – array of [`camb.sources.SourceWindow`](sources.md#camb.sources.SourceWindow)
* **CustomSources** – [`camb.model.CustomSources`](#camb.model.CustomSources)
#### *property* N_eff
* **Returns:**
Effective number of degrees of freedom in relativistic species at early times.
#### copy()
Make an independent copy of this object.
* **Returns:**
a deep copy of self
#### *classmethod* dict(state)
Make an instance of the class from a dictionary of field values (used to restore from repr)
* **Parameters:**
**state** – dictionary of values
* **Returns:**
new instance
#### diff(params)
Print differences between this set of parameters and params
* **Parameters:**
**params** – another CAMBparams instance
#### get_DH(ombh2=None, delta_neff=None)
Get the deuterium ratio D/H by interpolation using the
[`bbn.BBNPredictor`](bbn.md#camb.bbn.BBNPredictor) instance passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology)
(or the default one, if Y_He has not been set).
* **Parameters:**
* **ombh2** – $\Omega_b h^2$ (default: value passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology))
* **delta_neff** – additional $N_{\rm eff}$ relative to standard value (of 3.044)
(default: from values passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology))
* **Returns:**
BBN deuterium ratio D/H
#### get_Y_p(ombh2=None, delta_neff=None)
Get BBN helium nucleon fraction (NOT the same as the mass fraction Y_He) by interpolation using the
[`bbn.BBNPredictor`](bbn.md#camb.bbn.BBNPredictor) instance passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology)
(or the default one, if Y_He has not been set).
* **Parameters:**
* **ombh2** – $\Omega_b h^2$ (default: value passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology))
* **delta_neff** – additional $N_{\rm eff}$ relative to standard value (of 3.044)
(default: from values passed to [`set_cosmology()`](#camb.model.CAMBparams.set_cosmology))
* **Returns:**
$Y_p^{\rm BBN}$ helium nucleon fraction predicted by BBN.
#### replace(instance)
Replace the content of this class with another instance, doing a deep copy (in Fortran)
* **Parameters:**
**instance** – instance of the same class to replace this instance with
#### scalar_power(k: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### scalar_power(k: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get the primordial scalar curvature power spectrum at $k$
* **Parameters:**
**k** – wavenumber $k$ (in ${\rm Mpc}^{-1}$ units)
* **Returns:**
power spectrum at $k$
#### set_H0_for_theta(theta, cosmomc_approx=False, theta_H0_range=(10, 100), est_H0=67.0, iteration_threshold=8, setter_H0=None)
Set H0 to give a specified value of the acoustic angular scale parameter theta.
* **Parameters:**
* **theta** – value of $r_s/D_M$ at redshift $z_\star$
* **cosmomc_approx** – if true, use approximate fitting formula for $z_\star$,
if false do full numerical calculation
* **theta_H0_range** – min, max interval to search for H0 (in km/s/Mpc)
* **est_H0** – an initial guess for H0 in km/s/Mpc, used in the case cosmomc_approx=False.
* **iteration_threshold** – difference in H0 from est_H0 for which to iterate,
used for cosmomc_approx=False to correct for small changes in zstar when H0 changes
* **setter_H0** – if specified, a function to call to set H0 for each iteration to find thetastar. It should be
a function(pars: CAMBParams, H0: float). Not normally needed, but can be used e.g. when DE model needs to be
changed for each H0 because it depends explicitly on e.g. Omega_m.
#### set_accuracy(AccuracyBoost=1.0, lSampleBoost=1.0, lAccuracyBoost=1.0, DoLateRadTruncation=True, min_l_logl_sampling=None)
Set parameters determining overall calculation accuracy (large values may give big slow down).
For finer control you can set individual accuracy parameters by changing CAMBParams.Accuracy
([`model.AccuracyParams`](#camb.model.AccuracyParams)) .
* **Parameters:**
* **AccuracyBoost** – increase AccuracyBoost to decrease integration step size, increase density of k
sampling, etc.
* **lSampleBoost** – increase lSampleBoost to increase density of L sampling for CMB
* **lAccuracyBoost** – increase lAccuracyBoost to increase the maximum L included in the Boltzmann hierarchies
* **DoLateRadTruncation** – If True, use approximation to radiation perturbation evolution at late times
* **min_l_logl_sampling** – at L>min_l_logl_sampling uses sparser log sampling for L interpolation;
increase above 5000 for better accuracy at L > 5000
* **Returns:**
self
#### set_classes(dark_energy_model=None, initial_power_model=None, non_linear_model=None, recombination_model=None, reionization_model=None)
Change the classes used to implement parts of the model.
* **Parameters:**
* **dark_energy_model** – ‘fluid’, ‘ppf’, or name of a DarkEnergyModel class
* **initial_power_model** – name of an InitialPower class
* **non_linear_model** – name of a NonLinearModel class
* **recombination_model** – name of RecombinationModel class
* **reionization_model** – name of a ReionizationModel class
#### set_cosmology(H0: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, ombh2=0.022, omch2=0.12, omk=0.0, cosmomc_theta: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, thetastar: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, neutrino_hierarchy: [str](https://docs.python.org/3/library/stdtypes.html#str) | [int](https://docs.python.org/3/library/functions.html#int) = 'degenerate', num_massive_neutrinos=1, mnu=0.06, nnu=3.044, YHe: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, meffsterile=0.0, standard_neutrino_neff=3.044, TCMB=2.7255, tau: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, zrei: [float](https://docs.python.org/3/library/functions.html#float) | [None](https://docs.python.org/3/library/constants.html#None) = None, Alens=1.0, bbn_predictor: [None](https://docs.python.org/3/library/constants.html#None) | [str](https://docs.python.org/3/library/stdtypes.html#str) | [BBNPredictor](bbn.md#camb.bbn.BBNPredictor) = None, theta_H0_range=(10, 100), setter_H0=None)
Sets cosmological parameters in terms of physical densities and parameters (e.g. as used in Planck analyses).
Default settings give a single distinct neutrino mass eigenstate, by default one neutrino with mnu = 0.06eV.
Set the neutrino_hierarchy parameter to normal or inverted to use a two-eigenstate model that is a good
approximation to the known mass splittings seen in oscillation measurements.
For more fine-grained control can set the neutrino parameters directly rather than using this function.
Instead of setting the Hubble parameter directly, you can instead set the acoustic scale parameter
(cosmomc_theta, which is based on a fitting formula for simple models, or thetastar, which is numerically
calculated more generally). Note that you must have already set the dark energy model, you can’t use
set_cosmology with theta and then change the background evolution (which would change theta at the calculated
H0 value). Likewise, the dark energy model cannot depend explicitly on H0 unless you provide a custom
setter_H0 function to update the model for each H0 iteration used to search for thetastar.
If in doubt, print CAMBparams after setting parameters to see the underlying values that have been set.
* **Parameters:**
* **H0** – Hubble parameter today in km/s/Mpc. Can leave unset and instead set thetastar or cosmomc_theta
(which solves for the required H0).
* **ombh2** – physical density in baryons
* **omch2** – physical density in cold dark matter
* **omk** – Omega_K curvature parameter
* **cosmomc_theta** – The approximate CosmoMC theta parameter $\theta_{\rm MC}$. The angular
diameter distance is calculated numerically, but the redshift $z_\star$
is calculated using an approximate (quite accurate but non-general) fitting formula.
Leave unset to use H0 or thetastar.
* **thetastar** – The angular acoustic scale parameter $\theta_\star = r_s(z_*)/D_M(z_*)$, defined as
the ratio of the photon-baryon sound horizon $r_s$ to the angular diameter
distance $D_M$, where both quantities are evaluated at $z_*$, the redshift at
which the optical depth (excluding reionization) is unity. Leave unset to use H0 or cosmomc_theta.
* **neutrino_hierarchy** – ‘degenerate’, ‘normal’, or ‘inverted’ (1 or 2 eigenstate approximation)
* **num_massive_neutrinos** – number of massive neutrinos. If meffsterile is set, this is the number of
massive active neutrinos.
* **mnu** – sum of neutrino masses (in eV). Omega_nu is calculated approximately from this assuming neutrinos
non-relativistic today; i.e. mnu here is defined as a direct proxy for Omega_nu. Internally the actual
physical mass is calculated from the Omega_nu accounting for small mass-dependent velocity corrections
but neglecting spectral distortions to the neutrino distribution.
Set the neutrino field values directly if you need finer control or more complex neutrino models.
* **nnu** – N_eff, effective relativistic degrees of freedom
* **YHe** – Helium mass fraction. If None, set from BBN consistency.
* **meffsterile** – effective mass of sterile neutrinos (set along with nnu greater than the standard value).
Defined as in the Planck papers. You do not need to also change num_massive_neutrinos.
* **standard_neutrino_neff** – default value for N_eff in standard cosmology (non-integer to allow for partial
heating of neutrinos at electron-positron annihilation and QED effects)
* **TCMB** – CMB temperature (in Kelvin)
* **tau** – optical depth; if None and zrei is None, current Reion settings are not changed
* **zrei** – reionization mid-point redshift (set tau=None to use this)
* **Alens** – (non-physical) scaling of the lensing potential compared to prediction
* **bbn_predictor** – [`bbn.BBNPredictor`](bbn.md#camb.bbn.BBNPredictor) instance used to get YHe from BBN consistency if YHe is None,
or name of a BBN predictor class, or file name of an interpolation table
* **theta_H0_range** – if thetastar or cosmomc_theta is specified, the min, max interval of H0 values to map to;
if H0 is outside this range it will raise an exception.
* **setter_H0** – if specified, a function to call to set H0 for each iteration to find thetastar. It should be
a function(pars: CAMBParams, H0: float). Not normally needed, but can be used e.g. when DE model needs to be
changed for each H0 because it depends explicitly on H0
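For example, setting the cosmology via the acoustic scale rather than H0 (illustrative values):
```python
import camb

pars = camb.CAMBparams()
pars.set_cosmology(cosmomc_theta=0.0104, ombh2=0.022, omch2=0.12, tau=0.055)
print(pars.H0)  # H0 has been solved for internally to match the requested theta
```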
#### set_custom_scalar_sources(custom_sources, source_names=None, source_ell_scales=None, frame='CDM', code_path=None)
Set custom sources for angular power spectrum using camb.symbolic sympy expressions.
* **Parameters:**
* **custom_sources** – list of sympy expressions for the angular power spectrum sources
* **source_names** – optional list of string names for the sources
* **source_ell_scales** – list or dictionary of scalings for each source name, where for integer entry n,
the source for multipole $\ell$ is scaled by $\sqrt{(\ell+n)!/(\ell-n)!}$,
i.e. $n=2$ for a new polarization-like source.
* **frame** – if the source is not gauge invariant, frame in which to interpret result
* **code_path** – optional path for output of source code for CAMB f90 source function
#### set_dark_energy(w=-1.0, cs2=1.0, wa=0, dark_energy_model='fluid')
Set dark energy parameters (use set_dark_energy_w_a to set w(a) from numerical table instead)
To use a custom dark energy model, assign the class instance to the DarkEnergy field instead.
* **Parameters:**
* **w** – $w\equiv p_{\rm de}/\rho_{\rm de}$, assumed constant
* **wa** – evolution of w (for dark_energy_model=ppf)
* **cs2** – rest-frame sound speed squared of dark energy fluid
* **dark_energy_model** – model to use (‘fluid’ or ‘ppf’), default is ‘fluid’
* **Returns:**
self
#### set_dark_energy_w_a(a, w, dark_energy_model='fluid')
Set the dark energy equation of state from tabulated values (which are cubic spline interpolated).
* **Parameters:**
* **a** – array of sampled a = 1/(1+z) values
* **w** – array of w(a)
* **dark_energy_model** – model to use (‘fluid’ or ‘ppf’), default is ‘fluid’
* **Returns:**
self
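A sketch of tabulating a CPL-like equation of state $w(a) = w_0 + w_a(1-a)$ (the values are illustrative):
```python
import numpy as np
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.12)
a = np.logspace(-5, 0, 1000)   # scale factors at which to tabulate w
w = -0.9 + 0.1 * (1 - a)       # w(a) = w0 + wa*(1 - a) with w0 = -0.9, wa = 0.1
pars.set_dark_energy_w_a(a, w, dark_energy_model='ppf')
```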
#### set_for_lmax(lmax, max_eta_k=None, lens_potential_accuracy=0, lens_margin=150, k_eta_fac=2.5, lens_k_eta_reference=18000.0, nonlinear=None)
Set parameters to get CMB power spectra accurate to a specific l_max.
Note this does not fix the actual output L range; spectra may be calculated above l_max
(but may not be accurate there). To fix the l_max for output arrays use the optional input argument
to [`results.CAMBdata.get_cmb_power_spectra()`](results.md#camb.results.CAMBdata.get_cmb_power_spectra) etc.
* **Parameters:**
* **lmax** – $\ell_{\rm max}$ you want
* **max_eta_k** – maximum value of $k \eta_0\approx k\chi_*$ to use, which indirectly sets k_max.
If None, sensible value set automatically.
* **lens_potential_accuracy** – Set to 1 or higher if you want to get the lensing potential accurate
(1 is only Planck-level accuracy)
* **lens_margin** – the $\Delta \ell_{\rm max}$ to use to ensure lensed $C_\ell$ are correct
at $\ell_{\rm max}$
* **k_eta_fac** – k_eta_fac default factor for setting max_eta_k = k_eta_fac\*lmax if max_eta_k=None
* **lens_k_eta_reference** – value of max_eta_k to use when lens_potential_accuracy>0; use
k_eta_max = lens_k_eta_reference\*lens_potential_accuracy
* **nonlinear** – use non-linear power spectrum; if None, sets nonlinear if lens_potential_accuracy>0 otherwise
preserves current setting
* **Returns:**
self
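For example (illustrative values):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_for_lmax(2500, lens_potential_accuracy=1)  # accurate lensed C_l up to l ~ 2500
results = camb.get_results(pars)
cl = results.get_total_cls(2500)  # lensed total D_ell
```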
#### set_initial_power(initial_power_params)
Set the InitialPower primordial power spectrum parameters
* **Parameters:**
**initial_power_params** – [`initialpower.InitialPowerLaw`](initialpower.md#camb.initialpower.InitialPowerLaw)
or [`initialpower.SplinedInitialPower`](initialpower.md#camb.initialpower.SplinedInitialPower) instance
* **Returns:**
self
#### set_initial_power_function(P_scalar, P_tensor=None, kmin=1e-06, kmax=100.0, N_min=200, rtol=5e-05, effective_ns_for_nonlinear=None, args=())
Set the initial power spectrum from a function P_scalar(k, \*args), and optionally also the tensor spectrum.
The function is called to make a pre-computed array which is then interpolated inside CAMB. The sampling in k
is set automatically so that the spline is accurate, but you may also need to increase other
accuracy parameters.
* **Parameters:**
* **P_scalar** – function returning normalized initial scalar curvature power as function of k (in Mpc^{-1})
* **P_tensor** – optional function returning normalized initial tensor power spectrum
* **kmin** – minimum wavenumber to compute
* **kmax** – maximum wavenumber to compute
* **N_min** – minimum number of spline points for the pre-computation
* **rtol** – relative tolerance for deciding how many points are enough
* **effective_ns_for_nonlinear** – an effective n_s for use with approximate non-linear corrections
* **args** – optional list of arguments passed to P_scalar (and P_tensor)
* **Returns:**
self
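A sketch of passing a custom scalar power spectrum function (the functional form and values are purely illustrative):
```python
import camb

def P_scalar(k, As, ns):
    # Simple power-law primordial spectrum with pivot k0 = 0.05/Mpc
    return As * (k / 0.05) ** (ns - 1)

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.12)
pars.set_initial_power_function(P_scalar, args=(2e-9, 0.96))
```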
#### set_initial_power_table(k, pk=None, pk_tensor=None, effective_ns_for_nonlinear=None)
Set a general initial power spectrum from tabulated values. It’s up to you to ensure the sampling
of the k values is high enough that it can be interpolated accurately.
* **Parameters:**
* **k** – array of k values (Mpc^{-1})
* **pk** – array of primordial curvature perturbation power spectrum values P(k_i)
* **pk_tensor** – array of tensor spectrum values
* **effective_ns_for_nonlinear** – an effective n_s for use with approximate non-linear corrections
#### set_matter_power(redshifts=(0.0,), kmax=1.2, k_per_logint=None, nonlinear=None, accurate_massive_neutrino_transfers=False, silent=False)
Set parameters for calculating matter power spectra and transfer functions.
* **Parameters:**
* **redshifts** – array of redshifts to calculate
* **kmax** – maximum k to calculate (where k is just k, not k/h)
* **k_per_logint** – minimum number of k steps per log k. Set to zero to use default optimized spacing.
* **nonlinear** – if None, uses existing setting, otherwise boolean for whether to use non-linear matter power.
* **accurate_massive_neutrino_transfers** – if you want the massive neutrino transfers accurately
* **silent** – if True, don’t give warnings about sort order
* **Returns:**
self
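For example (illustrative values; redshifts are given in decreasing order to avoid a sort-order warning):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_matter_power(redshifts=[1.0, 0.5, 0.0], kmax=2.0)
results = camb.get_results(pars)
kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1.0, npoints=200)
```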
#### set_nonlinear_lensing(nonlinear)
Settings for whether or not to use non-linear corrections for the CMB lensing potential.
Note that set_for_lmax also sets lensing to be non-linear if lens_potential_accuracy>0
* **Parameters:**
**nonlinear** – true to use non-linear corrections
#### tensor_power(k: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### tensor_power(k: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get the primordial tensor curvature power spectrum at $k$
* **Parameters:**
**k** – wavenumber $k$ (in ${\rm Mpc}^{-1}$ units)
* **Returns:**
tensor power spectrum at $k$
#### validate()
Do some quick tests for sanity
* **Returns:**
True if OK
### *class* camb.model.AccuracyParams
Structure with parameters governing numerical accuracy. AccuracyBoost will also scale almost all the other
parameters except for lSampleBoost (which is specific to the output interpolation) and lAccuracyBoost
(which is specific to the multipole hierarchy evolution), e.g. setting AccuracyBoost=2, IntTolBoost=1.5, means
that internally the k sampling for integration will be boosted by AccuracyBoost\*IntTolBoost = 3.
Not intended to be separately instantiated, only used as part of CAMBparams.
If you want to set fields with [`camb.set_params()`](camb.md#camb.set_params), use ‘Accuracy.xxx’:yyy in the parameter dictionary.
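For example, either of the following (illustrative) approaches adjusts accuracy settings, the first assuming the ‘Accuracy.xxx’ key convention described above:
```python
import camb

# Via set_params, using 'Accuracy.xxx' keys in the parameter dictionary
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122,
                       **{'Accuracy.AccuracyBoost': 2.0, 'Accuracy.IntTolBoost': 1.5})

# Or by setting fields directly on an existing CAMBparams instance
pars2 = camb.CAMBparams()
pars2.Accuracy.AccuracyBoost = 2.0
pars2.Accuracy.IntTolBoost = 1.5
```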
* **Variables:**
* **AccuracyBoost** – (*float64*) general accuracy setting affecting everything related to step sizes etc. (including separate settings below except the next two)
* **lSampleBoost** – (*float64*) accuracy for sampling in ell for interpolation for the C_l (if >=50, all ell are calculated)
* **lAccuracyBoost** – (*float64*) Boosts number of multipoles integrated in Boltzmann hierarchy
* **AccuratePolarization** – (*boolean*) Do you care about the accuracy of the polarization Cls?
* **AccurateBB** – (*boolean*) Do you care about BB accuracy (e.g. in lensing)
* **AccurateReionization** – (*boolean*) Do you care about percent level accuracy on EE signal from reionization?
* **TimeStepBoost** – (*float64*) Sampling time steps
* **BackgroundTimeStepBoost** – (*float64*) Number of time steps for background thermal history and source window interpolation
* **IntTolBoost** – (*float64*) Tolerances for integrating differential equations
* **SourcekAccuracyBoost** – (*float64*) Accuracy of k sampling for source time integration
* **IntkAccuracyBoost** – (*float64*) Accuracy of k sampling for integration
* **TransferkBoost** – (*float64*) Accuracy of k sampling for transfer functions
* **NonFlatIntAccuracyBoost** – (*float64*) Accuracy of non-flat time integration
* **BessIntBoost** – (*float64*) Accuracy of bessel integration truncation
* **LensingBoost** – (*float64*) Accuracy of the lensing of CMB power spectra
* **NonlinSourceBoost** – (*float64*) Accuracy of steps and kmax used for the non-linear correction
* **BesselBoost** – (*float64*) Accuracy of bessel pre-computation sampling
* **LimberBoost** – (*float64*) Accuracy of Limber approximation use
* **SourceLimberBoost** – (*float64*) Scales when to switch to Limber for source windows
* **KmaxBoost** – (*float64*) Boost max k for source window functions
* **neutrino_q_boost** – (*float64*) Number of momenta integrated for neutrino perturbations
### *class* camb.model.TransferParams
Object storing parameters for the matter power spectrum calculation.
Not intended to be separately instantiated, only used as part of CAMBparams.
* **Variables:**
* **high_precision** – (*boolean*) True for more accuracy
* **accurate_massive_neutrinos** – (*boolean*) True if you want neutrino transfer functions accurate (false by default)
* **kmax** – (*float64*) k_max to output (no h in units)
* **k_per_logint** – (*integer*) number of points per log k interval. If zero, set an irregular optimized spacing
* **PK_num_redshifts** – (*integer*) number of redshifts to calculate
* **PK_redshifts** – (*float64 array*) redshifts to output for the matter transfer and power
### *class* camb.model.SourceTermParams
Structure with parameters determining how galaxy/lensing/21cm power spectra and transfer functions are calculated.
Not intended to be separately instantiated, only used as part of CAMBparams.
* **Variables:**
* **limber_windows** – (*boolean*) Use Limber approximation where appropriate. CMB lensing uses Limber even if limber_window is false, but method is changed to be consistent with other sources if limber_windows is true
* **limber_phi_lmin** – (*integer*) When limber_windows=True, the minimum L to use Limber approximation for the lensing potential and other sources (which may use higher but not lower)
* **counts_density** – (*boolean*) Include the density perturbation source
* **counts_redshift** – (*boolean*) Include redshift distortions
* **counts_lensing** – (*boolean*) Include magnification bias for number counts
* **counts_velocity** – (*boolean*) Non-redshift distortion velocity terms
* **counts_radial** – (*boolean*) Radial displacement velocity term; does not include time delay; subset of counts_velocity, just 1 / (chi \* H) term
* **counts_timedelay** – (*boolean*) Include time delay terms \* 1 / (H \* chi)
* **counts_ISW** – (*boolean*) Include tiny ISW terms
* **counts_potential** – (*boolean*) Include tiny terms in potentials at source
* **counts_evolve** – (*boolean*) Account for source evolution
* **line_phot_dipole** – (*boolean*) Dipole sources for 21cm
* **line_phot_quadrupole** – (*boolean*) Quadrupole sources for 21cm
* **line_basic** – (*boolean*) Include main 21cm monopole density/spin temperature sources
* **line_distortions** – (*boolean*) Redshift distortions for 21cm
* **line_extra** – (*boolean*) Include other sources
* **line_reionization** – (*boolean*) Replace the E modes with 21cm polarization
* **use_21cm_mK** – (*boolean*) Use mK units for 21cm
### *class* camb.model.CustomSources
Structure containing symbolic-compiled custom CMB angular power spectrum source functions.
Don’t change this directly, instead call [`model.CAMBparams.set_custom_scalar_sources()`](#camb.model.CAMBparams.set_custom_scalar_sources).
* **Variables:**
* **num_custom_sources** – (*integer*) number of sources set
* **c_source_func** – (*pointer*) Don’t directly change this
* **custom_source_ell_scales** – (*integer array*) scaling in L for outputs
## https://camb.readthedocs.io/en/latest/results.html
# Calculation results
IMPORTANT:
CAMB returns power spectra $C_\ell$ as $D_\ell = \ell(\ell+1)C_\ell/2\pi$ by default (unless `raw_cl=True`).
The CMB lensing spectrum is returned by default as $[\ell(\ell+1)]^2C_\ell^\phi/2\pi = 4 C_\ell^\kappa/2\pi$.
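For example, to recover the raw $C_\ell$ and the lensing potential $C_\ell^\phi$ from the default outputs (a sketch; parameter values are illustrative):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=2000)
results = camb.get_results(pars)
cl_raw = results.get_total_cls(2000, raw_cl=True)  # C_ell directly rather than D_ell
clpp = results.get_lens_potential_cls(2000)        # column 0 is [L(L+1)]^2 C_L^phi / 2pi
L = np.arange(clpp.shape[0])
c_phi = np.zeros(clpp.shape[0])
c_phi[1:] = clpp[1:, 0] * 2 * np.pi / (L[1:] * (L[1:] + 1)) ** 2  # recover C_L^phi
```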
### *class* camb.results.CAMBdata(\*args, \*\*kwargs)
An object for storing calculational data, parameters and transfer functions.
Results for a set of parameters (given in a [`CAMBparams`](model.md#camb.model.CAMBparams) instance)
are returned by the [`camb.get_background()`](camb.md#camb.get_background), [`camb.get_transfer_functions()`](camb.md#camb.get_transfer_functions) or [`camb.get_results()`](camb.md#camb.get_results)
functions. Exactly which quantities are already calculated depends on which of these functions you use and the
input parameters.
To quickly make a fully calculated CAMBdata instance for a set of parameters you can call [`camb.get_results()`](camb.md#camb.get_results).
* **Variables:**
* **Params** – [`camb.model.CAMBparams`](model.md#camb.model.CAMBparams)
* **ThermoDerivedParams** – (*float64 array*) array of derived parameters, see `results.CAMBdata.get_derived_params()` to get as a dictionary
* **flat** – (*boolean*) flat universe
* **closed** – (*boolean*) closed universe
* **grhocrit** – (*float64*) kappa\*a^2\*rho_c(0)/c^2 with units of Mpc\*\*(-2)
* **grhog** – (*float64*) kappa/c^2\*4\*sigma_B/c^3 T_CMB^4
* **grhor** – (*float64*) 7/8\*(4/11)^(4/3)\*grhog (per massless neutrino species)
* **grhob** – (*float64*) baryon contribution
* **grhoc** – (*float64*) CDM contribution
* **grhov** – (*float64*) Dark energy contribution
* **grhornomass** – (*float64*) grhor\*number of massless neutrino species
* **grhok** – (*float64*) curvature contribution to critical density
* **taurst** – (*float64*) time at start of recombination
* **dtaurec** – (*float64*) time step in recombination
* **taurend** – (*float64*) time at end of recombination
* **tau_maxvis** – (*float64*) time at peak visibility
* **adotrad** – (*float64*) da/d tau in early radiation-dominated era
* **omega_de** – (*float64*) Omega for dark energy today
* **curv** – (*float64*) curvature K
* **curvature_radius** – (*float64*) $1/\sqrt{|K|}$
* **Ksign** – (*float64*) Ksign = 1,0 or -1
* **tau0** – (*float64*) conformal time today
* **chi0** – (*float64*) comoving angular diameter distance of big bang; rofChi(tau0/curvature_radius)
* **scale** – (*float64*) relative to flat. e.g. for scaling L sampling
* **akthom** – (*float64*) sigma_T \* (number density of protons now)
* **fHe** – (*float64*) n_He_tot / n_H_tot
* **Nnow** – (*float64*) number density today
* **z_eq** – (*float64*) matter-radiation equality redshift assuming all neutrinos relativistic
* **grhormass** – (*float64 array*)
* **nu_masses** – (*float64 array*)
* **num_transfer_redshifts** – (*integer*) Number of calculated redshift outputs for the matter transfer (including those for CMB lensing)
* **transfer_redshifts** – (*float64 array*) Calculated output redshifts
* **PK_redshifts_index** – (*integer array*) Indices of the requested PK_redshifts
* **OnlyTransfers** – (*boolean*) Only calculating transfer functions, not power spectra
* **HasScalarTimeSources** – (*boolean*) calculate and save time source functions, not power spectra
#### angular_diameter_distance(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### angular_diameter_distance(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get (non-comoving) angular diameter distance to redshift z.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer
functions or power spectra.
* **Parameters:**
**z** – redshift or array of redshifts
* **Returns:**
angular diameter distances, matching rank of z
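For example (illustrative values):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
results = camb.get_background(pars)
print(results.angular_diameter_distance(0.5))                        # scalar in, scalar out
print(results.angular_diameter_distance(np.array([0.5, 1.0, 2.0])))  # array in, array out
```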
#### angular_diameter_distance2(z1: [float](https://docs.python.org/3/library/functions.html#float), z2: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### angular_diameter_distance2(z1: [float](https://docs.python.org/3/library/functions.html#float) | [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], z2: [float](https://docs.python.org/3/library/functions.html#float) | [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get angular diameter distance between two redshifts
$\frac{r}{1+z_2}\text{sin}_K\left(\frac{\chi(z_2) - \chi(z_1)}{r}\right)$
where $r$ is curvature radius and $\chi$ is the comoving radial distance.
If z_1 >= z_2 returns zero.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer
functions or power spectra.
* **Parameters:**
* **z1** – redshift 1, or array of redshifts
* **z2** – redshift 2, or array of redshifts
* **Returns:**
result (scalar or array of distances between pairs of z1, z2)
#### calc_background(params)
Calculate the background evolution and thermal history.
e.g. call this if you want to get derived parameters and call background functions
* **Parameters:**
**params** – [`CAMBparams`](model.md#camb.model.CAMBparams) instance to use
#### calc_background_no_thermo(params, do_reion=False)
Calculate the background evolution without calculating thermal or ionization history.
e.g. call this if you want to just use [`angular_diameter_distance()`](#camb.results.CAMBdata.angular_diameter_distance) and similar background functions
* **Parameters:**
* **params** – [`CAMBparams`](model.md#camb.model.CAMBparams) instance to use
* **do_reion** – whether to initialize the reionization model
#### calc_power_spectra(params=None)
Calculates transfer functions and power spectra.
* **Parameters:**
**params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use
#### calc_transfers(params, only_transfers=True, only_time_sources=False)
Calculate the transfer functions (for CMB and matter power, as determined
by params.WantCls, params.WantTransfer).
* **Parameters:**
* **params** – [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use
* **only_transfers** – only calculate transfer functions, no power spectra
* **only_time_sources** – only calculate time transfer functions, no (p,l,k) transfer functions or
non-linear scaling
* **Returns:**
non-zero if error, zero if OK
#### comoving_radial_distance(z: [float](https://docs.python.org/3/library/functions.html#float), tol=0.0001) → [float](https://docs.python.org/3/library/functions.html#float)
#### comoving_radial_distance(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], tol=0.0001) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get comoving radial distance from us to redshift z in Mpc. This is efficient for arrays.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer
functions or power spectra.
* **Parameters:**
* **z** – redshift
* **tol** – numerical tolerance parameter
* **Returns:**
comoving radial distance (Mpc)
#### conformal_time(z: [float](https://docs.python.org/3/library/functions.html#float), presorted=None, tol=None) → [float](https://docs.python.org/3/library/functions.html#float)
#### conformal_time(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], presorted=None, tol=None) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Conformal time from hot big bang to redshift z in Megaparsec.
* **Parameters:**
* **z** – redshift or array of redshifts
* **presorted** – if True, redshifts already sorted to be monotonically increasing, if False decreasing,
or if None unsorted. If presorted is True or False no checks are done.
* **tol** – integration tolerance
* **Returns:**
eta(z)/Mpc
#### conformal_time_a1_a2(a1: [float](https://docs.python.org/3/library/functions.html#float), a2: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### conformal_time_a1_a2(a1: [float](https://docs.python.org/3/library/functions.html#float) | [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], a2: [float](https://docs.python.org/3/library/functions.html#float) | [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get conformal time between two scale factors (=comoving radial distance travelled by light on light cone)
* **Parameters:**
* **a1** – scale factor 1
* **a2** – scale factor 2
* **Returns:**
eta(a2)-eta(a1) = chi(a1)-chi(a2) in Megaparsec
#### copy()
Make an independent copy of this object.
* **Returns:**
a deep copy of self
#### cosmomc_theta()
Get $\theta_{\rm MC}$, an approximation of the ratio of the sound horizon to the angular diameter
distance at recombination.
* **Returns:**
$\theta_{\rm MC}$
#### *classmethod* dict(state)
Make an instance of the class from a dictionary of field values (used to restore from repr)
* **Parameters:**
**state** – dictionary of values
* **Returns:**
new instance
#### get_BAO(redshifts, params)
Get BAO parameters at given redshifts, using parameters in params
* **Parameters:**
* **redshifts** – list of redshifts
* **params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance to use
* **Returns:**
array of rs/DV, H, DA, F_AP for each redshift as 2D array
#### get_Omega(var, z: [float](https://docs.python.org/3/library/functions.html#float) = 0) → [float](https://docs.python.org/3/library/functions.html#float)
#### get_Omega(var, z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get density relative to critical density of variable var
* **Parameters:**
* **var** – one of ‘K’, ‘cdm’, ‘baryon’, ‘photon’, ‘neutrino’ (massless), ‘nu’ (massive neutrinos), ‘de’
* **z** – redshift
* **Returns:**
$\Omega_i(a)$
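A minimal usage sketch (parameter values are illustrative):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
results = camb.get_background(pars)

z = np.array([0.0, 1.0, 3.0])
print(results.get_Omega('de', z))   # dark energy density relative to critical
print(results.get_Omega('cdm', z))  # cold dark matter density relative to critical
```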
#### get_background_densities(a: [float](https://docs.python.org/3/library/functions.html#float), vars=model.density_names, format='dict') → [dict](https://docs.python.org/3/library/stdtypes.html#dict) | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
#### get_background_densities(a: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], vars=model.density_names, format='dict') → [dict](https://docs.python.org/3/library/stdtypes.html#dict) | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get the individual densities as a function of scale factor. Returns $8\pi G a^4 \rho_i$ in Mpc units.
$\Omega_i$ can be simply obtained by taking the ratio of the components to tot.
* **Parameters:**
* **a** – scale factor or array of scale factors
* **vars** – list of variables to output (default all)
* **format** – ‘dict’ or ‘array’, for either dict of 1D arrays indexed by name, or 2D array
* **Returns:**
n_a x len(vars) 2D numpy array or dict of 1D arrays of $8\pi G a^4 \rho_i$ in Mpc units.
#### get_background_outputs()
Get BAO values for redshifts set in Params.z_outputs
* **Returns:**
rs/DV, H, DA, F_AP for each requested redshift (as 2D array)
#### get_background_redshift_evolution(z, vars=['x_e', 'opacity', 'visibility', 'cs2b', 'T_b', 'dopacity', 'ddopacity', 'dvisibility', 'ddvisibility'], format='dict')
Get the evolution of background variables as a function of redshift.
For the moment a and H are rather perversely only available via [`get_time_evolution()`](#camb.results.CAMBdata.get_time_evolution)
* **Parameters:**
* **z** – array of requested redshifts to output
* **vars** – list of variable names to output
* **format** – ‘dict’ or ‘array’, for either dict of 1D arrays indexed by name, or 2D array
* **Returns:**
n_eta x len(vars) 2D numpy array of outputs or dict of 1D arrays
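A minimal usage sketch (parameter values are illustrative; the variable names used here are from the default list above):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
results = camb.get_background(pars)  # thermal history is computed by default

z = np.linspace(0, 1500, 200)
back = results.get_background_redshift_evolution(z, ['x_e', 'visibility'], format='dict')
print(back['x_e'][:5])  # free electron fraction at the first few requested redshifts
```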
#### get_background_time_evolution(eta: [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray), vars: [str](https://docs.python.org/3/library/stdtypes.html#str) | [list](https://docs.python.org/3/library/stdtypes.html#list)[[str](https://docs.python.org/3/library/stdtypes.html#str)] = ['x_e', 'opacity', 'visibility', 'cs2b', 'T_b', 'dopacity', 'ddopacity', 'dvisibility', 'ddvisibility'], format: [str](https://docs.python.org/3/library/stdtypes.html#str) = 'dict') → [dict](https://docs.python.org/3/library/stdtypes.html#dict)[[str](https://docs.python.org/3/library/stdtypes.html#str), [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get the evolution of background variables as a function of conformal time.
For the moment a and H are rather perversely only available via [`get_time_evolution()`](#camb.results.CAMBdata.get_time_evolution)
* **Parameters:**
* **eta** – array of requested conformal times to output
* **vars** – list of variable names to output
* **format** – ‘dict’ or ‘array’, for either dict of 1D arrays indexed by name, or 2D array
* **Returns:**
n_eta x len(vars) 2D numpy array of outputs or dict of 1D arrays
#### get_cmb_correlation_functions(params=None, lmax=None, spectrum='lensed_scalar', xvals=None, sampling_factor=1)
Get the CMB correlation functions from the power spectra.
By default evaluated at points $\cos(\theta)$ = xvals that are roots of Legendre polynomials,
for accurate back integration with [`correlations.corr2cl()`](correlations.md#camb.correlations.corr2cl).
If xvals is explicitly given, instead calculates correlations at provided $\cos(\theta)$ values.
* **Parameters:**
* **params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use. If None, must have
previously set parameters and called [`calc_power_spectra()`](#camb.results.CAMBdata.calc_power_spectra) (e.g. if you got this instance
using [`camb.get_results()`](camb.md#camb.get_results)),
* **lmax** – optional maximum L to use from the cls arrays
* **spectrum** – type of CMB power spectrum to get; default ‘lensed_scalar’, one of
[‘total’, ‘unlensed_scalar’, ‘unlensed_total’, ‘lensed_scalar’, ‘tensor’]
* **xvals** – optional array of $\cos(\theta)$ values at which to calculate correlation function.
* **sampling_factor** – multiple of lmax for the Gauss-Legendre order if xvals not given (default 1)
* **Returns:**
if xvals not given: corrs, xvals, weights; if xvals specified, just corrs.
corrs is 2D array corrs[i, ix], where ix=0,1,2,3 are T, Q+U, Q-U and cross, and i indexes xvals
#### get_cmb_power_spectra(params=None, lmax=None, spectra=('total', 'unlensed_scalar', 'unlensed_total', 'lensed_scalar', 'tensor', 'lens_potential'), CMB_unit=None, raw_cl=False)
Get CMB power spectra, as requested by the ‘spectra’ argument. All power spectra are
$\ell(\ell+1)C_\ell/2\pi$ self-owned numpy arrays (0..lmax, 0..3), where the 0..3 indices
are TT, EE, BB, TE, unless raw_cl is True in which case just $C_\ell$ is returned.
For the lens_potential the power spectrum returned is that of the deflection.
Note that even if lmax is None, all spectra are returned to the same lmax, appropriate
for lensed spectra. Use the individual functions instead if you want the full unlensed
and lensing potential power spectra to the higher lmax actually computed.
* **Parameters:**
* **params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use. If None, must have
previously set parameters and called calc_power_spectra (e.g. if you got this instance
using [`camb.get_results()`](camb.md#camb.get_results)),
* **lmax** – maximum L
* **spectra** – list of names of spectra to get
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
and $\mu K$ units for lensing cross.
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
dictionary of power spectrum arrays, indexed by names of requested spectra
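A minimal usage sketch (parameter values are illustrative):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=2500)
results = camb.get_results(pars)

powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
totCL = powers['total']          # ell(ell+1)C_ell/2pi in muK^2; columns are TT, EE, BB, TE
clpp = powers['lens_potential']  # deflection spectrum and its T, E cross-correlations
print(totCL.shape, clpp.shape)
```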
#### get_cmb_transfer_data(tp='scalar')
Get $C_\ell$ transfer functions
* **Returns:**
[`ClTransferData`](#camb.results.ClTransferData) instance holding output arrays (copies, not pointers)
#### get_cmb_unlensed_scalar_array_dict(params=None, lmax=None, CMB_unit=None, raw_cl=False)
Get all unlensed auto and cross power spectra, including any custom source functions set
using [`model.CAMBparams.set_custom_scalar_sources()`](model.md#camb.model.CAMBparams.set_custom_scalar_sources).
* **Parameters:**
* **params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use. If None, must have
previously set parameters and called [`calc_power_spectra()`](#camb.results.CAMBdata.calc_power_spectra) (e.g. if you got this instance
using [`camb.get_results()`](camb.md#camb.get_results)),
* **lmax** – maximum $\ell$
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for
CMB $C_\ell$ and $\mu K$ units for lensing cross.
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
dictionary of power spectrum arrays, index as TxT, TxE, PxW1, W1xW2, custom_name_1xT… etc.
Note that P is the lensing deflection, lensing windows Wx give convergence.
#### get_dark_energy_rho_w(a: [float](https://docs.python.org/3/library/functions.html#float)) → [tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[float](https://docs.python.org/3/library/functions.html#float), [float](https://docs.python.org/3/library/functions.html#float)]
#### get_dark_energy_rho_w(a: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray), [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)]
Get dark energy density in units of the dark energy density today, and equation of state parameter
$w\equiv P/\rho$
* **Parameters:**
**a** – scale factor or array of scale factors
* **Returns:**
rho, w arrays at redshifts $1/a-1$ [or scalars if $a$ is scalar]
#### get_derived_params()
* **Returns:**
dictionary of derived parameter values, indexed by name (age, zstar, thetastar, etc.)
Definitions of derived parameters follow those in the Planck parameter papers.
Note that all theta derived parameters (thetastar, etc.) here are scaled by a factor of 100 for historical reasons.
#### get_fsigma8()
Get $f\sigma_8$ growth values (must previously have calculated power spectra).
For general models $f\sigma_8$ is defined as in the Planck 2015 parameter paper in terms of
the velocity-density correlation: $\sigma^2_{vd}/\sigma_{dd}$ for $8 h^{-1} {\rm Mpc}$ spheres.
* **Returns:**
array of f\*sigma_8 values, in order of increasing time (decreasing redshift)
#### get_lens_potential_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get lensing deflection angle potential power spectrum, and cross-correlation with T and E. Must have already
calculated power spectra.
Power spectra are $[L(L+1)]^2C_L^{\phi\phi}/2\pi$ and corresponding deflection cross-correlations.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K$ units for lensing cross.
* **raw_cl** – return lensing potential $C_L$ rather than $[L(L+1)]^2C_L/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:3], where 0..2 indexes PP, PT, PE.
#### get_lensed_cls_with_spectrum(clpp, lmax=None, CMB_unit=None, raw_cl=False)
Get lensed CMB power spectra using curved-sky correlation function method, using
clpp as the lensing spectrum. Useful e.g. for getting partially delensed spectra.
* **Parameters:**
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE.
#### get_lensed_gradient_cls(lmax=None, CMB_unit=None, raw_cl=False, clpp=None)
Get lensed gradient scalar CMB power spectra in flat sky approximation
([arXiv:1101.2234](https://arxiv.org/abs/1101.2234)).
Note that lmax used to calculate results may need to be substantially larger than the lmax
output from this function (there is no extrapolation as in the main lensing routines).
Lensed power spectra must be already calculated.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **clpp** – custom array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum
to use (zero based), rather than calculated spectrum from this model
* **Returns:**
numpy array CL[0:lmax+1,0:8], where CL[:,i] are $T\nabla T$, $E\nabla E$,
$B\nabla B$, $PP_\perp$, $T\nabla E$, $TP_\perp$, $(\nabla T)^2$,
$\nabla T\nabla T$ where the first six are as defined in appendix C
of [1101.2234](https://arxiv.org/abs/1101.2234).
#### get_lensed_scalar_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get lensed scalar CMB power spectra. Must have already calculated power spectra.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE.
#### get_linear_matter_power_spectrum(var1=None, var2=None, hubble_units=True, k_hunit=True, have_power_spectra=True, params=None, nonlinear=False)
Calculates $P_{xy}(k)$, where x, y are one of model.Transfer_cdm, model.Transfer_xx etc.
The output k values are not regularly spaced, and not interpolated. They are either k or k/h depending on the
value of k_hunit (default True gives k/h).
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
* **Parameters:**
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **hubble_units** – if true, output power spectrum in (Mpc/h) units, otherwise Mpc
* **k_hunit** – if true, matter power is a function of k/h, if false, just k (both ${\rm Mpc}^{-1}$ units)
* **have_power_spectra** – set to False if not already computed power spectra
* **params** – if have_power_spectra=False, optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance
to specify new parameters
* **nonlinear** – include non-linear correction from halo model
* **Returns:**
k/h or k, z, PK, where the first is an array of k/h or k values, z is an array of redshifts,
and PK[i,j] is the value at z[i], k[j]/h or k[j]
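A minimal usage sketch (parameter values are illustrative):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_matter_power(redshifts=[0.0, 0.8], kmax=2.0)
results = camb.get_results(pars)

kh, z, pk = results.get_linear_matter_power_spectrum()
# pk[i, j] is P(k) at z[i], kh[j], in (Mpc/h)^3 with the default unit settings
print(kh.shape, z, pk.shape)
```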
#### get_matter_power_interpolator(nonlinear=True, var1=None, var2=None, hubble_units=True, k_hunit=True, return_z_k=False, log_interp=True, extrap_kmax=None, silent=False)
Assuming transfers have been calculated, return a 2D spline interpolation object to evaluate matter
power spectrum as function of z and k/h (or k). Uses self.Params.Transfer.PK_redshifts as the spline node
points in z. If fewer than four redshift points are used the interpolator uses a reduced order spline in z
(so results at intermediate z may be inaccurate), otherwise it uses bicubic.
Usage example:
```python
PK = results.get_matter_power_interpolator()
print("Power spectrum at z=0.5, k/h=0.1 is %s (Mpc/h)^3 " % (PK.P(0.5, 0.1)))
```
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
* **Parameters:**
* **nonlinear** – include non-linear correction from halo model
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **hubble_units** – if true, output power spectrum in $({\rm Mpc}/h)^{3}$ units,
otherwise ${\rm Mpc}^{3}$
* **k_hunit** – if true, matter power is a function of k/h, if false, just k (both ${\rm Mpc}^{-1}$ units)
* **return_z_k** – if true, return interpolator, z, k where z, k are the grid used
* **log_interp** – if true, interpolate log of power spectrum
(unless any values cross zero in which case ignored)
* **extrap_kmax** – if set, use power law extrapolation beyond kmax to extrap_kmax
(useful for tails of integrals)
* **silent** – Set True to silence warnings
* **Returns:**
An object PK based on [`RectBivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline),
that can be called with PK.P(z,kh) or PK(z,log(kh)) to get log matter power values.
If return_z_k=True, instead return interpolator, z, k where z, k are the grid used.
#### get_matter_power_spectrum(minkh=0.0001, maxkh=1.0, npoints=100, var1=None, var2=None, have_power_spectra=False, params=None)
Calculates $P_{xy}(k/h)$, where x, y are one of Transfer_cdm, Transfer_xx etc.
The output k values are regularly log spaced and interpolated. If NonLinear is set, the result is non-linear.
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
* **Parameters:**
* **minkh** – minimum value of k/h for output grid (very low values < 1e-4 may not be calculated)
* **maxkh** – maximum value of k/h (check consistent with input params.Transfer.kmax)
* **npoints** – number of points equally spaced in log k
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **have_power_spectra** – set to True if already computed power spectra
* **params** – if have_power_spectra=False and want to specify new parameters,
a [`CAMBparams`](model.md#camb.model.CAMBparams) instance
* **Returns:**
kh, z, PK, where kh and z are arrays of k/h and z respectively, and PK[i,j] is the value at z[i], kh[j]
#### get_matter_transfer_data() → [MatterTransferData](#camb.results.MatterTransferData)
Get matter transfer function data and sigma8 for calculated results.
* **Returns:**
[`MatterTransferData`](#camb.results.MatterTransferData) instance holding output arrays (copies, not pointers)
#### get_nonlinear_matter_power_spectrum(var1=None, var2=None, hubble_units=True, k_hunit=True, have_power_spectra=True, params=None)
Calculates $P_{xy}(k/h)$, where x, y are one of model.Transfer_cdm, model.Transfer_xx etc.
The output k values are not regularly spaced, and not interpolated.
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
* **Parameters:**
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **hubble_units** – if true, output power spectrum in $({\rm Mpc}/h)^{3}$ units,
otherwise ${\rm Mpc}^{3}$
* **k_hunit** – if true, matter power is a function of k/h, if false, just k (both ${\rm Mpc}^{-1}$ units)
* **have_power_spectra** – set to False if not already computed power spectra
* **params** – if have_power_spectra=False, optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance
to specify new parameters
* **Returns:**
k/h or k, z, PK, where the first is an array of k/h or k values, z is an array of redshifts,
and PK[i,j] is the value at z[i], k[j]/h or k[j]
#### get_partially_lensed_cls(Alens: [float](https://docs.python.org/3/library/functions.html#float) | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray), lmax=None, CMB_unit=None, raw_cl=False)
Get lensed CMB power spectra using curved-sky correlation function method, using
true lensing spectrum scaled by Alens. Alens can be an array in L for realistic delensing estimates.
Note that if Params.Alens is also set, the result is scaled by the product of both
* **Parameters:**
* **Alens** – scaling of the lensing relative to true, with Alens=1 being the standard result.
Can be a scalar in which case all L are scaled, or a zero-based array with the L by L scaling
(with L larger than the size of the array having Alens_L=1).
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE.
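A minimal usage sketch of partial delensing (parameter values and the Alens scaling are illustrative):
```python
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=3000)
results = camb.get_results(pars)

lensed = results.get_lensed_scalar_cls(lmax=2000, CMB_unit='muK')
delensed = results.get_partially_lensed_cls(0.7, lmax=2000, CMB_unit='muK')  # 30% delensing at all L
print(lensed[1000, 2], delensed[1000, 2])  # lensing BB power at L=1000 is reduced
```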
#### get_redshift_evolution(q, z, vars=['k/h', 'delta_cdm', 'delta_baryon', 'delta_photon', 'delta_neutrino', 'delta_nu', 'delta_tot', 'delta_nonu', 'delta_tot_de', 'Weyl', 'v_newtonian_cdm', 'v_newtonian_baryon', 'v_baryon_cdm', 'a', 'etak', 'H', 'growth', 'v_photon', 'pi_photon', 'E_2', 'v_neutrino', 'T_source', 'E_source', 'lens_potential_source'], lAccuracyBoost=4)
Get the mode evolution as a function of redshift for some k values.
* **Parameters:**
* **q** – wavenumber values to calculate (or array of k values)
* **z** – array of redshifts to output
* **vars** – list of variable names or camb.symbolic sympy expressions to output
* **lAccuracyBoost** – boost factor for ell accuracy (e.g. to get nice smooth curves for plotting)
* **Returns:**
nd array, A_{qti}, size(q) x size(times) x len(vars), or 2d array if q is scalar
#### get_sigma8()
Get $\sigma_8$ values at Params.PK_redshifts (must previously have calculated power spectra)
* **Returns:**
array of $\sigma_8$ values, in order of increasing time (decreasing redshift)
#### get_sigma8_0()
Get $\sigma_8$ value today (must previously have calculated power spectra)
* **Returns:**
$\sigma_8$ today
#### get_sigmaR(R: [float](https://docs.python.org/3/library/functions.html#float), z_indices: [int](https://docs.python.org/3/library/functions.html#int), var1=None, var2=None, hubble_units=True, return_R_z=False) → [float](https://docs.python.org/3/library/functions.html#float)
#### get_sigmaR(R: [float](https://docs.python.org/3/library/functions.html#float) | [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]], z_indices: [int](https://docs.python.org/3/library/functions.html#int) | [list](https://docs.python.org/3/library/stdtypes.html#list)[[int](https://docs.python.org/3/library/functions.html#int)] | [None](https://docs.python.org/3/library/constants.html#None) = None, var1=None, var2=None, hubble_units=True, return_R_z=False) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Calculate $\sigma_R$ values, the RMS linear matter fluctuation in spheres of radius R in linear theory.
Accuracy depends on the sampling with which the matter transfer functions are computed.
For a description of outputs for different var1, var2 see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables).
Note that numerical errors are slightly different to get_sigma8 for R=8 Mpc/h.
* **Parameters:**
* **R** – radius in Mpc or h^{-1} Mpc units, scalar or array
* **z_indices** – indices of redshifts in Params.Transfer.PK_redshifts to calculate
(default None gives all computed in order of increasing time (decreasing redshift);
-1 gives redshift 0; list gives all listed indices)
* **var1** – variable i (index, or name of variable; default delta_tot)
* **var2** – variable j (index, or name of variable; default delta_tot)
* **hubble_units** – if true, R is in h^{-1} Mpc, otherwise Mpc
* **return_R_z** – if true, return tuple of R, z, sigmaR
(where R always Mpc units not h^{-1}Mpc and R, z are arrays)
* **Returns:**
array of $\sigma_R$ values, or 2D array indexed by (redshift, R)
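A minimal usage sketch (parameter values and radii are illustrative):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_matter_power(redshifts=[0.0], kmax=10.0)  # larger kmax improves sigma_R accuracy
results = camb.get_results(pars)

R = np.array([8.0, 12.0, 20.0])             # radii in Mpc/h (hubble_units=True)
print(results.get_sigmaR(R, z_indices=-1))  # z_indices=-1 selects redshift zero
print(results.get_sigma8_0())               # close to get_sigmaR at R=8 up to numerical accuracy
```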
#### get_source_cls_dict(params=None, lmax=None, raw_cl=False)
Get all source window function and CMB lensing and cross power spectra. Does not include CMB spectra.
Note that P is the deflection angle, but lensing windows return the kappa power.
* **Parameters:**
* **params** – optional [`CAMBparams`](model.md#camb.model.CAMBparams) instance with parameters to use. If None, must have
previously set parameters and called [`calc_power_spectra()`](#camb.results.CAMBdata.calc_power_spectra)
(e.g. if you got this instance using [`camb.get_results()`](camb.md#camb.get_results)),
* **lmax** – maximum $\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
dictionary of power spectrum arrays, index as PXP, PxW1, W1xW2, … etc.
#### get_tensor_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get tensor CMB power spectra. Must have already calculated power spectra.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE
#### get_time_evolution(q, eta, vars=['k/h', 'delta_cdm', 'delta_baryon', 'delta_photon', 'delta_neutrino', 'delta_nu', 'delta_tot', 'delta_nonu', 'delta_tot_de', 'Weyl', 'v_newtonian_cdm', 'v_newtonian_baryon', 'v_baryon_cdm', 'a', 'etak', 'H', 'growth', 'v_photon', 'pi_photon', 'E_2', 'v_neutrino', 'T_source', 'E_source', 'lens_potential_source'], lAccuracyBoost=4, frame='CDM')
Get the mode evolution as a function of conformal time for some k values.
Note that gravitational potentials (e.g. Weyl) are not integrated in the code and are
calculated as derived parameters; they may be numerically unstable far outside the horizon.
(use the series expansion result if needed far outside the horizon)
* **Parameters:**
* **q** – wavenumber values to calculate (or array of k values)
* **eta** – array of requested conformal times to output
* **vars** – list of variable names or sympy symbolic expressions to output (using camb.symbolic)
* **lAccuracyBoost** – factor by which to increase l_max in hierarchies compared to default - often
needed to get nice smooth curves of acoustic oscillations for plotting.
* **frame** – for symbolic expressions, can specify frame name if the variable is not gauge invariant.
e.g. specifying Delta_g and frame=’Newtonian’ would give the Newtonian gauge photon density perturbation.
* **Returns:**
nd array, A_{qti}, size(q) x size(times) x len(vars), or 2d array if q is scalar
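A minimal usage sketch (parameter values, the wavenumber and the conformal-time range are illustrative):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
data = camb.get_background(pars)

eta = np.linspace(1, 400, 300)  # conformal times in Mpc
ev = data.get_time_evolution(0.1, eta, ['delta_baryon', 'delta_photon'])
print(ev.shape)                 # (len(eta), 2) since the wavenumber is a scalar
```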
#### get_total_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get lensed-scalar + tensor CMB power spectra. Must have already calculated power spectra.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE
#### get_unlensed_scalar_array_cls(lmax=None)
Get array of all cross power spectra. Must have already calculated power spectra.
Results are dimensionless, and not scaled by custom_scaled_ell_fac.
* **Parameters:**
**lmax** – lmax to output to
* **Returns:**
numpy array CL[0:, 0:,0:lmax+1], where 0.. index T, E, lensing potential, source window functions
#### get_unlensed_scalar_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get unlensed scalar CMB power spectra. Must have already calculated power spectra.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE. CL[:,2] will be zero.
#### get_unlensed_total_cls(lmax=None, CMB_unit=None, raw_cl=False)
Get unlensed CMB power spectra, including tensors if relevant. Must have already calculated power spectra.
* **Parameters:**
* **lmax** – lmax to output to
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **Returns:**
numpy array CL[0:lmax+1,0:4], where 0..3 indexes TT, EE, BB, TE.
#### h_of_z(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### h_of_z(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get Hubble rate at redshift z, in ${\rm Mpc}^{-1}$ units, scalar or array
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer functions
or power spectra.
Use hubble_parameter instead if you want the result in [km/s/Mpc] units.
* **Parameters:**
**z** – redshift
* **Returns:**
H(z)
#### hubble_parameter(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### hubble_parameter(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get Hubble rate at redshift z, in km/s/Mpc units. Scalar or array.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer functions
or power spectra.
* **Parameters:**
**z** – redshift
* **Returns:**
H(z)/[km/s/Mpc]
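A minimal usage sketch (parameter values are illustrative):
```python
import numpy as np
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122)
results = camb.get_background(pars)

z = np.array([0.0, 0.5, 1.0, 2.0])
print(results.hubble_parameter(z))  # H(z) in km/s/Mpc
print(results.h_of_z(z))            # same quantity in 1/Mpc units
```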
#### luminosity_distance(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### luminosity_distance(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get luminosity distance from us to redshift z.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer functions or
power spectra.
* **Parameters:**
**z** – redshift or array of redshifts
* **Returns:**
luminosity distance (matches rank of z)
#### physical_time(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### physical_time(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get physical time from hot big bang to redshift z in Julian Gigayears.
* **Parameters:**
**z** – redshift
* **Returns:**
t(z)/Gigayear
#### physical_time_a1_a2(a1, a2)
Get physical time between two scale factors in Julian Gigayears.
Must have called [`calc_background()`](#camb.results.CAMBdata.calc_background), [`calc_background_no_thermo()`](#camb.results.CAMBdata.calc_background_no_thermo) or calculated transfer functions
or power spectra.
* **Parameters:**
* **a1** – scale factor 1
* **a2** – scale factor 2
* **Returns:**
(age(a2)-age(a1))/Gigayear
#### power_spectra_from_transfer(initial_power_params=None, silent=False)
Assuming [`calc_transfers()`](#camb.results.CAMBdata.calc_transfers) or [`calc_power_spectra()`](#camb.results.CAMBdata.calc_power_spectra) have already been used, re-calculate the
power spectra using a new set of initial power spectrum parameters with otherwise the same cosmology.
This is typically much faster than re-calculating everything, as the transfer functions can be re-used.
NOTE: if non-linear lensing is on, the transfer functions have the non-linear correction included when
they are calculated, so using this function with a different initial power spectrum will not give quite the
same results as doing a full recalculation.
* **Parameters:**
* **initial_power_params** – [`initialpower.InitialPowerLaw`](initialpower.md#camb.initialpower.InitialPowerLaw)
or [`initialpower.SplinedInitialPower`](initialpower.md#camb.initialpower.SplinedInitialPower)
instance with new primordial power spectrum parameters, or None to use current power spectrum.
* **silent** – suppress warnings about non-linear corrections not being recalculated
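A minimal usage sketch of re-using transfer functions with different primordial spectra (parameter values are illustrative):
```python
import camb
from camb import initialpower

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=2500)
results = camb.get_transfer_functions(pars)

# Re-use the transfer functions for several primordial power-law spectra
for ns in (0.95, 0.97):
    inflation_params = initialpower.InitialPowerLaw()
    inflation_params.set_params(As=2e-9, ns=ns)
    results.power_spectra_from_transfer(inflation_params)
    cl = results.get_total_cls(lmax=2000, CMB_unit='muK')
    print(ns, cl[1000, 0])
```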
#### redshift_at_comoving_radial_distance(chi: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### redshift_at_comoving_radial_distance(chi: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Convert comoving radial distance array to redshift array.
* **Parameters:**
**chi** – comoving radial distance (in Mpc), scalar or array
* **Returns:**
redshift at chi, scalar or array
#### redshift_at_conformal_time(eta: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### redshift_at_conformal_time(eta: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Convert conformal time array to redshift array.
Note that this function requires the transfers or background to have been calculated with no_thermo=False
(the default).
* **Parameters:**
**eta** – conformal time from the big bang (in Mpc), scalar or array
* **Returns:**
redshift at eta, scalar or array
#### replace(instance)
Replace the content of this class with another instance, doing a deep copy (in Fortran)
* **Parameters:**
**instance** – instance of the same class to replace this instance with
#### save_cmb_power_spectra(filename, lmax=None, CMB_unit='muK')
Save CMB power to a plain text file. Output is lensed total $\ell(\ell+1)C_\ell/2\pi$ then
lensing potential and cross: L TT EE BB TE PP PT PE.
* **Parameters:**
* **filename** – filename to save
* **lmax** – lmax to save
* **CMB_unit** – scale results from dimensionless. Use ‘muK’ for $\mu K^2$ units for CMB $C_\ell$
and $\mu K$ units for lensing cross.
#### set_params(params)
Set parameters from params. Note that this does not recompute anything;
you will need to call [`calc_transfers()`](#camb.results.CAMBdata.calc_transfers) if you change any parameters affecting the
background cosmology or the transfer function settings.
* **Parameters:**
**params** – a [`CAMBparams`](model.md#camb.model.CAMBparams) instance
#### sound_horizon(z: [float](https://docs.python.org/3/library/functions.html#float)) → [float](https://docs.python.org/3/library/functions.html#float)
#### sound_horizon(z: [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number) | [float](https://docs.python.org/3/library/functions.html#float) | [int](https://docs.python.org/3/library/functions.html#int)] | [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)[[tuple](https://docs.python.org/3/library/stdtypes.html#tuple)[[Any](https://docs.python.org/3/library/typing.html#typing.Any), ...], [dtype](https://numpy.org/doc/stable/reference/generated/numpy.dtype.html#numpy.dtype)[[number](https://numpy.org/doc/stable/reference/arrays.scalars.html#numpy.number)]]) → [ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html#numpy.ndarray)
Get comoving sound horizon as function of redshift in Megaparsecs, the integral of the sound speed
up to given redshift.
* **Parameters:**
**z** – redshift or array of redshifts
* **Returns:**
r_s(z)
### *class* camb.results.MatterTransferData
MatterTransferData is the base class for storing matter power transfer function data for various q values.
In a flat universe q=k, in a closed universe q is quantized.
To get an instance of this data, call [`results.CAMBdata.get_matter_transfer_data()`](#camb.results.CAMBdata.get_matter_transfer_data).
For a description of the different Transfer_xxx outputs (and 21cm case) see [Matter power spectrum and matter transfer function variables](transfer_variables.md#transfer-variables); the
array is indexed by index+1 given by:
- Transfer_kh = 1 (k/h)
- Transfer_cdm = 2 (cdm)
- Transfer_b = 3 (baryons)
- Transfer_g = 4 (photons)
- Transfer_r = 5 (massless neutrinos)
- Transfer_nu = 6 (massive neutrinos)
- Transfer_tot = 7 (total matter)
- Transfer_nonu = 8 (total matter excluding neutrinos)
- Transfer_tot_de = 9 (total including dark energy perturbations)
- Transfer_Weyl = 10 (Weyl potential)
- Transfer_Newt_vel_cdm = 11 (Newtonian CDM velocity)
- Transfer_Newt_vel_baryon = 12 (Newtonian baryon velocity)
- Transfer_vel_baryon_cdm = 13 (relative baryon-cdm velocity)
* **Variables:**
* **nq** – number of q modes calculated
* **q** – array of q values calculated
* **sigma_8** – array of $\sigma_8$ values for each redshift
* **sigma2_vdelta_8** – array of v-delta8 correlation, so sigma2_vdelta_8/sigma_8 can define growth
* **transfer_data** – numpy array T[entry, q_index, z_index] storing transfer functions for each redshift and q;
entry+1 can be one of the Transfer_xxx variables above.
#### transfer_z(name, z_index=0)
Get transfer function (function of q, for each q in self.q_trans) by name for given redshift index
* **Parameters:**
* **name** – parameter name
* **z_index** – which redshift
* **Returns:**
array of transfer function values for each calculated k
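A minimal sketch of accessing the transfer_data array (parameter values are illustrative; note the zero-based entry index is Transfer_xxx - 1):
```python
import camb
from camb import model

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_matter_power(redshifts=[0.0], kmax=2.0)
results = camb.get_results(pars)

trans = results.get_matter_transfer_data()
kh = trans.transfer_data[model.Transfer_kh - 1, :, 0]         # k/h values at the first redshift index
delta_cdm = trans.transfer_data[model.Transfer_cdm - 1, :, 0]  # CDM transfer function
print(kh[:3], delta_cdm[:3], trans.sigma_8)
```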
### *class* camb.results.ClTransferData
ClTransferData is the base class for storing CMB power transfer functions, as a function of q and $\ell$.
To get an instance of this data, call [`results.CAMBdata.get_cmb_transfer_data()`](#camb.results.CAMBdata.get_cmb_transfer_data)
* **Variables:**
* **NumSources** – number of sources calculated (size of p index)
* **q** – array of q values calculated (=k in flat universe)
* **L** – int array of $\ell$ values calculated
* **delta_p_l_k** – transfer functions, indexed by source, L, q
#### get_transfer(source=0)
Return $C_\ell$ transfer functions as a function of $\ell$
and $q$ ($= k$ in a flat universe).
* **Parameters:**
**source** – index of source: e.g. 0 for temperature, 1 for E polarization, 2 for lensing potential
* **Returns:**
array of computed L, array of computed q, transfer functions T(L,q)
## https://camb.readthedocs.io/en/latest/symbolic.html
# Symbolic manipulation
This module defines the scalar linear perturbation equations for standard LCDM cosmology, using sympy.
It uses the covariant perturbation notation, but includes functions to project into the
Newtonian or synchronous gauge, as well as constructing general gauge invariant quantities.
It uses “t” as the conformal time variable (=tau in the fortran code).
For a guide to usage and content see the [ScalEqs notebook](https://camb.readthedocs.io/en/latest/ScalEqs.html)
As well as defining standard quantities, and how they map to CAMB variables, there are also functions for
converting a symbolic expression to CAMB source code, and compiling custom sources for use with CAMB
(as used by [`model.CAMBparams.set_custom_scalar_sources()`](model.md#camb.model.CAMBparams.set_custom_scalar_sources), [`results.CAMBdata.get_time_evolution()`](results.md#camb.results.CAMBdata.get_time_evolution))
A Lewis July 2017
### camb.symbolic.LinearPerturbation(name, species=None, camb_var=None, camb_sub=None, frame_dependence=None, description=None)
Returns a linear perturbation variable, a function of conformal time t.
Use help(x) to quickly view all quantities defined for the result.
* **Parameters:**
* **name** – sympy name for the Function
* **species** – tag for the species if relevant (not used)
* **camb_var** – relevant CAMB fortran variable
* **camb_sub** – if not equal to camb_var, a string giving the expression in CAMB variables
* **frame_dependence** – the change in the perturbation when the frame 4-velocity u changes
from u to u + delta_frame. Should be a sympy expression involving delta_frame.
* **description** – string describing variable
* **Returns:**
sympy Function instance (function of t), with attributes set to the arguments above.
### camb.symbolic.camb_fortran(expr, name='camb_function', frame='CDM', expand=False)
Convert symbolic expression to CAMB fortran code, using CAMB variable notation.
This is not completely general, but it will handle conversion of Newtonian gauge
variables like Psi_N, and most derivatives up to second order.
* **Parameters:**
* **expr** – symbolic sympy expression using camb.symbolic variables and functions (plus any
standard general functions that CAMB can convert to fortran).
* **name** – lhs variable string to assign result to
* **frame** – frame in which to interpret non gauge-invariant expressions.
By default, uses CDM frame (synchronous gauge), as used natively by CAMB.
* **expand** – do a sympy expand before generating code
* **Returns:**
fortran code snippet
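A minimal usage sketch (the output variable name ‘my_source’ is illustrative; Psi_N is the Newtonian-gauge potential mentioned above):
```python
from camb import symbolic

# Convert the Newtonian-gauge potential Psi_N into a CAMB fortran assignment.
# The 'my_source' left-hand-side name is just an illustrative choice.
print(symbolic.camb_fortran(symbolic.Psi_N, name='my_source'))
```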
### camb.symbolic.cdm_gauge(x)
Evaluates an expression in the CDM frame $(v_c=0, A=0)$. Equivalent to the synchronous gauge
but using the covariant variable names.
* **Parameters:**
**x** – expression
* **Returns:**
expression evaluated in CDM frame.
### camb.symbolic.compile_source_function_code(code_body, file_path='', compiler=None, fflags=None, cache=True)
Compile fortran code into function pointer in compiled shared library.
The function is not intended to be called from python, but for passing back to compiled CAMB.
* **Parameters:**
* **code_body** – fortran code to do calculation and assign sources(i) output array.
Can start with declarations of temporary variables if needed.
* **file_path** – optional output path for generated f90 code
* **compiler** – compiler, usually on path
* **fflags** – options for compiler
* **cache** – whether to cache the result
* **Returns:**
function pointer for compiled code
### *class* camb.symbolic.f_K(\*args)
### camb.symbolic.get_hierarchies(lmax=5)
Get Boltzmann hierarchies up to lmax for photons (J), E polarization and massless neutrinos (G).
* **Parameters:**
**lmax** – maximum multipole
* **Returns:**
list of equations
### camb.symbolic.get_scalar_temperature_sources(checks=False)
Derives terms in the line-of-sight source, after integration by parts so that the source is only integrated
against a Bessel function (no derivatives).
* **Parameters:**
**checks** – True to do consistency checks on result
* **Returns:**
monopole_source, ISW, doppler, quadrupole_source
### camb.symbolic.make_frame_invariant(expr, frame='CDM')
Makes the quantity gauge invariant, assuming currently evaluated in frame ‘frame’.
frame can either be a string frame name, or a variable that is zero in the current frame,
e.g. frame = Delta_g gives the constant photon density frame.
So make_frame_invariant(sigma, frame=Delta_g) will return the combination of sigma and Delta_g
that is frame invariant (and equal to just sigma when Delta_g=0).
### camb.symbolic.newtonian_gauge(x)
Evaluates an expression in the Newtonian gauge (zero shear, sigma=0).
Converts to using conventional metric perturbation variables for metric
$$
ds^2 = a^2\left( (1+2\Psi_N)d t^2 - (1-2\Phi_N)\delta_{ij}dx^idx^j\right)
$$
* **Parameters:**
**x** – expression
* **Returns:**
expression evaluated in the Newtonian gauge
### camb.symbolic.synchronous_gauge(x)
Evaluates an expression in the synchronous gauge, using conventional synchronous-gauge variables.
* **Parameters:**
**x** – expression
* **Returns:**
synchronous gauge variable expression
## https://camb.readthedocs.io/en/latest/bbn.html
# BBN models
### *class* camb.bbn.BBNIterpolator(x, y, z, bbox=[None, None, None, None], kx=3, ky=3, s=0, maxit=20)
### *class* camb.bbn.BBNPredictor
The base class for making BBN predictions for Helium abundance
#### DH(ombh2, delta_neff=0.0)
Get deuterium ratio D/H. Must be implemented by extensions.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$
* **delta_neff** – additional N_eff relative to standard value (of 3.044)
* **Returns:**
D/H
#### Y_He(ombh2, delta_neff=0.0)
Get BBN helium mass fraction for CMB code.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$
* **delta_neff** – additional N_eff relative to standard value (of 3.044)
* **Returns:**
Y_He helium mass fraction predicted by BBN
#### Y_p(ombh2, delta_neff=0.0)
Get BBN helium nucleon fraction. Must be implemented by extensions.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$
* **delta_neff** – additional N_eff relative to standard value (of 3.044)
* **Returns:**
Y_p helium nucleon fraction predicted by BBN
### *class* camb.bbn.BBN_fitting_parthenope(tau_neutron=None)
Old BBN predictions for Helium abundance using fitting formulae based on Parthenope (pre 2015).
#### DH(ombh2, delta_neff, tau_neutron=None)
Get deuterium ratio D/H. Must be implemented by extensions.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$
* **delta_neff** – additional N_eff relative to standard value (of 3.044)
* **Returns:**
D/H
#### Y_p(ombh2, delta_neff=0.0, tau_neutron=None)
Get BBN helium nucleon fraction, using Parthenope fits as in the Planck 2015 papers.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$
* **delta_neff** – additional N_eff relative to standard value (of 3.046 for consistency with Planck)
* **tau_neutron** – neutron lifetime
* **Returns:**
$Y_p^{\rm BBN}$ helium nucleon fraction predicted by BBN
### *class* camb.bbn.BBN_table_interpolator(interpolation_table='PRIMAT_Yp_DH_ErrorMC_2021.dat', function_of=('ombh2', 'DeltaN'))
BBN predictor based on interpolation from a numerical table calculated by a BBN code.
Tables are supplied for [Parthenope](http://parthenope.na.infn.it/) 2017 (PArthENoPE_880.2_standard.dat),
similar but with Marcucci rates (PArthENoPE_880.2_marcucci.dat),
[PRIMAT](http://www2.iap.fr/users/pitrou/primat.htm) (PRIMAT_Yp_DH_Error.dat, PRIMAT_Yp_DH_ErrorMC_2021.dat).
* **Parameters:**
* **interpolation_table** – filename of interpolation table to use.
* **function_of** – two variables that determine the interpolation grid (x,y) in the table,
matching top column label comment. By default ombh2, DeltaN, and function argument names reflect that,
but can also be used more generally.
#### DH(ombh2, delta_neff=0.0, grid=False)
Get deuterium ratio D/H by interpolation in table
* **Parameters:**
* **ombh2** – $\Omega_b h^2$ (or, more generally, value of function_of[0])
* **delta_neff** – additional N_eff relative to standard value (of 3.044) (or value of function_of[1])
* **grid** – parameter for [`RectBivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline) (whether to evaluate the
results on a grid spanned by the input arrays, or at points specified by the input arrays)
* **Returns:**
D/H
#### Y_p(ombh2, delta_neff=0.0, grid=False)
Get BBN helium nucleon fraction by interpolation in table.
* **Parameters:**
* **ombh2** – $\Omega_b h^2$ (or, more generally, value of function_of[0])
* **delta_neff** – additional N_eff relative to standard value (of 3.044) (or value of function_of[1])
* **grid** – parameter for [`RectBivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline) (whether to evaluate the
results on a grid spanned by the input arrays, or at points specified by the input arrays)
* **Returns:**
Y_p helium nucleon fraction predicted by BBN. Call Y_He() to get mass fraction instead.
#### get(name, ombh2, delta_neff=0.0, grid=False)
Get value for variable “name” by interpolation from table (where name is given in the column header comment)
For example get(‘sig(D/H)’,0.0222,0) to get the error on D/H
* **Parameters:**
* **name** – string name of the parameter, as given in header of interpolation table
* **ombh2** – $\Omega_b h^2$ (or, more generally, value of function_of[0])
* **delta_neff** – additional N_eff relative to standard value (of 3.044) (or value of function_of[1])
* **grid** – parameter for [`RectBivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline) (whether to evaluate the
results on a grid spanned by the input arrays, or at points specified by the input arrays)
* **Returns:**
Interpolated value (or grid)
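A minimal usage sketch (the ombh2 value is illustrative; the ‘sig(D/H)’ column name is taken from the get() example above):
```python
from camb import bbn

predictor = bbn.BBN_table_interpolator()   # default PRIMAT 2021 interpolation table
print(predictor.Y_p(0.0224, 0.0))          # helium nucleon fraction
print(predictor.DH(0.0224, 0.0))           # deuterium ratio D/H
print(predictor.get('sig(D/H)', 0.0224))   # interpolated error column, as in the docstring example
```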
### camb.bbn.get_predictor(predictor_name=None)
Get instance of default BBNPredictor class. Currently this is numerical table interpolation, as used in the Planck 2018 analysis.
## https://camb.readthedocs.io/en/latest/dark_energy.html
# Dark Energy models
### *class* camb.dark_energy.DarkEnergyModel(\*args, \*\*kwargs)
Abstract base class for dark energy model implementations.
### *class* camb.dark_energy.DarkEnergyEqnOfState(\*args, \*\*kwargs)
Bases: [`DarkEnergyModel`](#camb.dark_energy.DarkEnergyModel)
Abstract base class for models using the w, wa parameterization w(a) = w + (1-a)\*wa,
or call set_w_a_table to set another tabulated w(a). If a tabulated w(a) is used, w and wa are set
to approximate values at z=0.
See [`model.CAMBparams.set_initial_power_function()`](model.md#camb.model.CAMBparams.set_initial_power_function) for a convenience constructor function to
set a general interpolated P(k) model from a python function.
* **Variables:**
* **w** – (*float64*) w(0)
* **wa** – (*float64*) -dw/da(0)
* **cs2** – (*float64*) fluid rest-frame sound speed squared
* **use_tabulated_w** – (*boolean*) using an interpolated tabulated w(a) rather than w, wa above
#### set_params(w=-1.0, wa=0, cs2=1.0)
Set the parameters so that P(a)/rho(a) = w(a) = w + (1-a)\*wa
* **Parameters:**
* **w** – w(0)
* **wa** – -dw/da(0)
* **cs2** – fluid rest-frame sound speed squared
#### set_w_a_table(a, w) → [DarkEnergyEqnOfState](#camb.dark_energy.DarkEnergyEqnOfState)
Set w(a) from numerical values (used as cubic spline). Note this is quite slow.
* **Parameters:**
* **a** – array of scale factors
* **w** – array of w(a)
* **Returns:**
self
### *class* camb.dark_energy.DarkEnergyFluid(\*args, \*\*kwargs)
Bases: [`DarkEnergyEqnOfState`](#camb.dark_energy.DarkEnergyEqnOfState)
Class implementing the w, wa or splined w(a) parameterization using the constant sound-speed single fluid model
(as for single-field quintessence).
#### set_w_a_table(a, w) → [DarkEnergyEqnOfState](#camb.dark_energy.DarkEnergyEqnOfState)
Set w(a) from numerical values (used as cubic spline). Note this is quite slow.
* **Parameters:**
* **a** – array of scale factors
* **w** – array of w(a)
* **Returns:**
self
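For example, a minimal sketch using the convenience setter on CAMBparams (parameter values are illustrative; here w(a) does not cross -1, which the fluid model requires):
```default
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
# w0-wa fluid dark energy; w(a) = -0.95 + 0.1(1-a) stays above w = -1
pars.set_dark_energy(w=-0.95, wa=0.1, cs2=1.0, dark_energy_model='fluid')
results = camb.get_results(pars)
```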
### *class* camb.dark_energy.DarkEnergyPPF(\*args, \*\*kwargs)
Bases: [`DarkEnergyEqnOfState`](#camb.dark_energy.DarkEnergyEqnOfState)
Class implementing the w, wa or splined w(a) parameterization in the PPF perturbation approximation
([arXiv:0808.3125](https://arxiv.org/abs/0808.3125))
Use inherited methods to set parameters or interpolation table.
Note PPF is not a physical model; it is just designed to allow crossing w=-1 in an ad hoc smooth way. For models
with w>-1 but far from a cosmological constant, it can give quite different answers from the fluid model with c_s^2=1.
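For example, a sketch of a tabulated w(a) that crosses -1, using the PPF class (the w(a) form is purely illustrative, and assigning a new model instance to pars.DarkEnergy follows the usual CAMB python pattern):
```default
import numpy as np
import camb
from camb.dark_energy import DarkEnergyPPF

a = np.logspace(-5, 0, 1000)   # scale factors for the table
w = -1.1 + 0.3 * (1 - a)       # crosses w = -1, so use PPF rather than the fluid model

de = DarkEnergyPPF()
de.set_w_a_table(a, w)

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.DarkEnergy = de
results = camb.get_results(pars)
```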
### *class* camb.dark_energy.Quintessence(\*args, \*\*kwargs)
Bases: [`DarkEnergyModel`](#camb.dark_energy.DarkEnergyModel)
Abstract base class for single scalar field quintessence models.
For each model the field value and derivative are stored and splined at sampled scale factor values.
To implement a new model, you need to define a new derived class in Fortran,
defining Vofphi and setting up initial conditions and interpolation tables (see TEarlyQuintessence as an example).
* **Variables:**
* **DebugLevel** – (*integer*)
* **astart** – (*float64*)
* **integrate_tol** – (*float64*)
* **sampled_a** – (*float64 array*)
* **phi_a** – (*float64 array*)
* **phidot_a** – (*float64 array*)
### *class* camb.dark_energy.EarlyQuintessence(\*args, \*\*kwargs)
Bases: [`Quintessence`](#camb.dark_energy.Quintessence)
Example early quintessence (axion-like, as [arXiv:1908.06995](https://arxiv.org/abs/1908.06995)) with potential
V(phi) = m^2 f^2 (1 - cos(phi/f))^n + Lambda_{cosmological constant}
* **Variables:**
* **n** – (*float64*) power index for potential
* **f** – (*float64*) f/Mpl (sqrt(8piG)f); only used for initial search value when use_zc is True
* **m** – (*float64*) mass parameter in reduced Planck mass units; only used for initial search value when use_zc is True
* **theta_i** – (*float64*) phi/f initial field value
* **frac_lambda0** – (*float64*) fraction of dark energy in cosmological constant today (approximated as 1)
* **use_zc** – (*boolean*) solve for f, m to get specific critical redshift zc and fde_zc
* **zc** – (*float64*) redshift of peak fractional early dark energy density
* **fde_zc** – (*float64*) fraction of early dark energy density to total at peak
* **npoints** – (*integer*) number of points for background integration spacing
* **min_steps_per_osc** – (*integer*) minimum number of steps per background oscillation scale
* **fde** – (*float64 array*) after initialized, the calculated background early dark energy fractions at sampled_a
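For example, one plausible usage sketch (assuming the usual keyword constructor for Fortran-mapped classes; the numbers are illustrative only):
```default
import camb
from camb.dark_energy import EarlyQuintessence

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
# With use_zc=True the code solves for m and f to give the requested
# peak early-dark-energy fraction fde_zc at redshift zc
pars.DarkEnergy = EarlyQuintessence(n=3, theta_i=3.1, use_zc=True, zc=3500.0, fde_zc=0.1)
results = camb.get_results(pars)
```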
### *class* camb.dark_energy.AxionEffectiveFluid(\*args, \*\*kwargs)
Bases: [`DarkEnergyModel`](#camb.dark_energy.DarkEnergyModel)
Example implementation of a specific (early) dark energy fluid model
([arXiv:1806.10608](https://arxiv.org/abs/1806.10608)).
Not well tested, but should serve to demonstrate how to make your own custom classes.
* **Variables:**
* **w_n** – (*float64*) effective equation of state parameter
* **fde_zc** – (*float64*) energy density fraction at z=zc
* **zc** – (*float64*) decay transition redshift (not same as peak of energy density fraction)
* **theta_i** – (*float64*) initial condition field value
## https://camb.readthedocs.io/en/latest/initialpower.html
# Initial power spectra
### *class* camb.initialpower.InitialPower(\*args, \*\*kwargs)
Abstract base class for initial power spectrum classes
### *class* camb.initialpower.InitialPowerLaw(\*args, \*\*kwargs)
Bases: [`InitialPower`](#camb.initialpower.InitialPower)
Object to store parameters for the primordial power spectrum in the standard power law expansion.
* **Variables:**
* **tensor_parameterization** – (integer/string, one of: tensor_param_indeptilt, tensor_param_rpivot, tensor_param_AT)
* **ns** – (*float64*)
* **nrun** – (*float64*)
* **nrunrun** – (*float64*)
* **nt** – (*float64*)
* **ntrun** – (*float64*)
* **r** – (*float64*)
* **pivot_scalar** – (*float64*)
* **pivot_tensor** – (*float64*)
* **As** – (*float64*)
* **At** – (*float64*)
#### has_tensors()
Do these settings have non-zero tensors?
* **Returns:**
True if non-zero tensor amplitude
#### set_params(As=2e-09, ns=0.96, nrun=0, nrunrun=0.0, r=0.0, nt=None, ntrun=0.0, pivot_scalar=0.05, pivot_tensor=0.05, parameterization='tensor_param_rpivot')
Set parameters using standard power law parameterization. If nt=None, uses inflation consistency relation.
* **Parameters:**
* **As** – comoving curvature power at k=pivot_scalar ($A_s$)
* **ns** – scalar spectral index $n_s$
* **nrun** – running of scalar spectral index $d n_s/d \log k$
* **nrunrun** – running of running of spectral index, $d^2 n_s/d (\log k)^2$
* **r** – tensor to scalar ratio at pivot
* **nt** – tensor spectral index $n_t$. If None, set using inflation consistency
* **ntrun** – running of tensor spectral index
* **pivot_scalar** – pivot scale for scalar spectrum
* **pivot_tensor** – pivot scale for tensor spectrum
* **parameterization** – See CAMB notes. One of
- tensor_param_indeptilt = 1
- tensor_param_rpivot = 2
- tensor_param_AT = 3
* **Returns:**
self
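For example (InitPower on a CAMBparams instance is an InitialPowerLaw by default; the values are illustrative):
```default
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
# nt=None (the default) uses the inflation consistency relation for the tensor tilt
pars.InitPower.set_params(As=2e-9, ns=0.965, r=0.05)
print(pars.InitPower.has_tensors())  # True, since r is non-zero
```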
### *class* camb.initialpower.SplinedInitialPower(\*args, \*\*kwargs)
Bases: [`InitialPower`](#camb.initialpower.InitialPower)
Object to store a generic primordial spectrum constructed from a set of sampled k_i, P(k_i) values
* **Variables:**
**effective_ns_for_nonlinear** – (*float64*) Effective n_s to use for approximate non-linear correction models
#### has_tensors()
Is the tensor spectrum set?
* **Returns:**
True if tensors
#### set_scalar_log_regular(kmin, kmax, PK)
Set log-regular cubic spline interpolation for P(k)
* **Parameters:**
* **kmin** – minimum k value (not minimum log(k))
* **kmax** – maximum k value (inclusive)
* **PK** – array of scalar power spectrum values, with PK[0]=P(kmin) and PK[-1]=P(kmax)
#### set_scalar_table(k, PK)
Set arrays of k and P(k) values for cubic spline interpolation.
Note that using [`set_scalar_log_regular()`](#camb.initialpower.SplinedInitialPower.set_scalar_log_regular) may be better
(faster, and easier to get fine enough spacing at low k)
* **Parameters:**
* **k** – array of k values (Mpc^{-1})
* **PK** – array of scalar power spectrum values
#### set_tensor_log_regular(kmin, kmax, PK)
Set log-regular cubic spline interpolation for tensor spectrum P_t(k)
* **Parameters:**
* **kmin** – minimum k value (not minimum log(k))
* **kmax** – maximum k value (inclusive)
* **PK** – array of tensor power spectrum values, with PK[0]=P_t(kmin) and PK[-1]=P_t(kmax)
#### set_tensor_table(k, PK)
Set arrays of k and P_t(k) values for cubic spline interpolation
* **Parameters:**
* **k** – array of k values (Mpc^{-1})
* **PK** – array of tensor power spectrum values
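For example, a sketch of supplying a custom scalar spectrum on a log-regular grid (the simple power-law form is only a stand-in, and attaching the object via CAMBparams.set_initial_power is the assumed convenience method):
```default
import numpy as np
import camb
from camb.initialpower import SplinedInitialPower

ks = np.logspace(-5, 1, 300)                 # log-regular k grid in Mpc^-1
PK = 2e-9 * (ks / 0.05) ** (0.965 - 1.0)     # stand-in custom P(k)

sp = SplinedInitialPower()
sp.set_scalar_log_regular(ks[0], ks[-1], PK)

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.set_initial_power(sp)
results = camb.get_results(pars)
```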
## https://camb.readthedocs.io/en/latest/nonlinear.html
# Non-linear models
### *class* camb.nonlinear.NonLinearModel(\*args, \*\*kwargs)
Abstract base class for non-linear correction models
* **Variables:**
**Min_kh_nonlinear** – (*float64*) minimum k/h at which to apply non-linear corrections
### *class* camb.nonlinear.Halofit(\*args, \*\*kwargs)
Bases: [`NonLinearModel`](#camb.nonlinear.NonLinearModel)
Various specific approximate non-linear correction models based on HaloFit.
* **Variables:**
* **halofit_version** – (integer/string, one of: original, bird, peacock, takahashi, mead, halomodel, casarini, mead2015, mead2016, mead2020, mead2020_feedback)
* **HMCode_A_baryon** – (*float64*) HMcode parameter A_baryon
* **HMCode_eta_baryon** – (*float64*) HMcode parameter eta_baryon
* **HMCode_logT_AGN** – (*float64*) HMcode parameter log10(T_AGN/K)
#### set_params(halofit_version='mead2020', HMCode_A_baryon=3.13, HMCode_eta_baryon=0.603, HMCode_logT_AGN=7.8)
Set the halofit model for non-linear corrections.
* **Parameters:**
* **halofit_version** –
One of
- original: [astro-ph/0207664](https://arxiv.org/abs/astro-ph/0207664)
- bird: [arXiv:1109.4416](https://arxiv.org/abs/1109.4416)
- peacock: [Peacock fit](http://www.roe.ac.uk/~jap/haloes/)
- takahashi: [arXiv:1208.2701](https://arxiv.org/abs/1208.2701)
- mead: HMCode [arXiv:1602.02154](https://arxiv.org/abs/1602.02154)
- halomodel: basic halomodel
- casarini: PKequal [arXiv:0810.0190](https://arxiv.org/abs/0810.0190), [arXiv:1601.07230](https://arxiv.org/abs/1601.07230)
- mead2015: original 2015 version of HMCode [arXiv:1505.07833](https://arxiv.org/abs/1505.07833)
- mead2016: Alias for ‘mead’.
- mead2020: 2020 version of HMcode [arXiv:2009.01858](https://arxiv.org/abs/2009.01858)
- mead2020_feedback: 2020 version of HMcode with baryonic feedback [arXiv:2009.01858](https://arxiv.org/abs/2009.01858)
* **HMCode_A_baryon** – HMcode parameter A_baryon. Default 3.13.
Used only in models mead2015 and mead2016 (and its alias mead).
* **HMCode_eta_baryon** – HMcode parameter eta_baryon. Default 0.603.
Used only in mead2015 and mead2016 (and its alias mead).
* **HMCode_logT_AGN** – HMcode parameter logT_AGN. Default 7.8. Used only in model mead2020_feedback.
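For example, a sketch selecting the HMcode-2020 feedback model before computing the non-linear matter power (NonLinearModel on a CAMBparams instance is a Halofit model by default; parameter values are illustrative):
```default
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.set_matter_power(redshifts=[0.0], kmax=10.0)
pars.NonLinear = camb.model.NonLinear_both
pars.NonLinearModel.set_params(halofit_version='mead2020_feedback', HMCode_logT_AGN=7.9)
results = camb.get_results(pars)
kh, z, pk_nonlin = results.get_matter_power_spectrum(minkh=1e-4, maxkh=10, npoints=200)
```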
### *class* camb.nonlinear.SecondOrderPK(\*args, \*\*kwargs)
Bases: [`NonLinearModel`](#camb.nonlinear.NonLinearModel)
Third-order Newtonian perturbation theory results for the non-linear correction.
Only intended for use at very high redshift (z>10) where corrections are perturbative, it will not give
sensible results at low redshift.
See Appendix F of [astro-ph/0702600](https://arxiv.org/abs/astro-ph/0702600) for equations and references.
Not intended for production use; it mainly serves as an example of an alternative non-linear model implementation.
## https://camb.readthedocs.io/en/latest/reionization.html
# Reionization models
### *class* camb.reionization.ReionizationModel(\*args, \*\*kwargs)
Abstract base class for reionization models.
* **Variables:**
**Reionization** – (*boolean*) Is there reionization? (can be off for matter power, which is independent of it)
### *class* camb.reionization.BaseTauWithHeReionization(\*args, \*\*kwargs)
Bases: [`ReionizationModel`](#camb.reionization.ReionizationModel)
Abstract class for models that map z_re to tau, and include second reionization of Helium
* **Variables:**
* **use_optical_depth** – (*boolean*) Whether to use the optical depth or redshift parameters
* **redshift** – (*float64*) Reionization redshift (xe=0.5) if use_optical_depth=False
* **optical_depth** – (*float64*) Optical depth if use_optical_depth=True
* **fraction** – (*float64*) Reionization fraction when complete, or -1 for full ionization of hydrogen and first ionization of helium.
* **include_helium_fullreion** – (*boolean*) Whether to include second reionization of helium
* **helium_redshift** – (*float64*) Redshift for second reionization of helium
* **helium_delta_redshift** – (*float64*) Width in redshift for second reionization of helium
* **helium_redshiftstart** – (*float64*) Include second helium reionization below this redshift
* **tau_solve_accuracy_boost** – (*float64*) Accuracy boosting parameter for solving for z_re from tau
* **timestep_boost** – (*float64*) Accuracy boosting parameter for the minimum number of time sampling steps through reionization
* **max_redshift** – (*float64*) Maximum redshift allowed when mapping tau into reionization redshift
#### get_zre(params, tau=None)
Get the midpoint redshift of reionization.
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance with cosmological parameters
* **tau** – if set, calculate the redshift for optical depth tau, otherwise uses currently set parameters
* **Returns:**
reionization mid-point redshift
#### set_extra_params(max_zrei=None)
Set extra parameters (not tau, or zrei)
* **Parameters:**
**max_zrei** – maximum redshift allowed when mapping tau into reionization redshift
#### set_tau(tau)
Set the optical depth
* **Parameters:**
**tau** – optical depth
* **Returns:**
self
#### set_zrei(zrei)
Set the mid-point reionization redshift
* **Parameters:**
**zrei** – mid-point redshift
* **Returns:**
self
### *class* camb.reionization.TanhReionization(\*args, \*\*kwargs)
Bases: [`BaseTauWithHeReionization`](#camb.reionization.BaseTauWithHeReionization)
This default (unphysical) tanh x_e parameterization is described in
Appendix B of [arXiv:0804.3865](https://arxiv.org/abs/0804.3865)
* **Variables:**
**delta_redshift** – (*float64*) Duration of reionization
#### set_extra_params(deltazrei=None, max_zrei=None) → [None](https://docs.python.org/3/library/constants.html#None)
Set extra parameters (not tau, or zrei)
* **Parameters:**
* **deltazrei** – delta z for reionization
* **max_zrei** – maximum redshift allowed when mapping tau into reionization
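For example (Reion on a CAMBparams instance is a TanhReionization model by default; values are illustrative):
```default
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
# Parameterize by optical depth rather than mid-point redshift
pars.Reion.set_tau(0.055)
pars.Reion.set_extra_params(deltazrei=1.0)
print(pars.Reion.get_zre(pars))  # corresponding mid-point reionization redshift
```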
### *class* camb.reionization.ExpReionization(\*args, \*\*kwargs)
Bases: [`BaseTauWithHeReionization`](#camb.reionization.BaseTauWithHeReionization)
An ionization fraction that decreases exponentially at high z, saturating to fully ionized at fixed redshift.
This model has a minimum non-zero tau around 0.04 for reion_redshift_complete=6.1.
Similar to e.g. arXiv:1509.02785, arXiv:2006.16828, but not attempting to fit shape near x_e~1 at z<6.1
* **Variables:**
* **reion_redshift_complete** – (*float64*) end of reionization
* **reion_exp_smooth_width** – (*float64*) redshift scale to smooth exponential
* **reion_exp_power** – (*float64*) power in exponential, exp(-lambda(z-redshift_complete)^exp_power)
#### set_extra_params(reion_redshift_complete=None, reion_exp_power=None, reion_exp_smooth_width=None, max_zrei=None) → [None](https://docs.python.org/3/library/constants.html#None)
Set extra parameters (not tau, or zrei)
* **Parameters:**
* **reion_redshift_complete** – redshift at which reionization complete (e.g. around 6)
* **reion_exp_power** – power in exponential decay with redshift
* **reion_exp_smooth_width** – smoothing parameter to keep derivative smooth
* **max_zrei** – maximum redshift allowed when mapping tau into reionization
## https://camb.readthedocs.io/en/latest/recombination.html
# Recombination models
### *class* camb.recombination.RecombinationModel(\*args, \*\*kwargs)
Abstract base class for recombination models
* **Variables:**
**min_a_evolve_Tm** – (*float64*) minimum scale factor at which to solve matter temperature perturbation if evolving sound speed or ionization fraction perturbations
### *class* camb.recombination.Recfast(\*args, \*\*kwargs)
Bases: [`RecombinationModel`](#camb.recombination.RecombinationModel)
RECFAST recombination model (see recfast source for details).
* **Variables:**
* **RECFAST_fudge** – (*float64*)
* **RECFAST_fudge_He** – (*float64*)
* **RECFAST_Heswitch** – (*integer*)
* **RECFAST_Hswitch** – (*boolean*)
* **AGauss1** – (*float64*)
* **AGauss2** – (*float64*)
* **zGauss1** – (*float64*)
* **zGauss2** – (*float64*)
* **wGauss1** – (*float64*)
* **wGauss2** – (*float64*)
### *class* camb.recombination.CosmoRec(\*args, \*\*kwargs)
Bases: [`RecombinationModel`](#camb.recombination.RecombinationModel)
[CosmoRec](https://www.jb.man.ac.uk/~jchluba/Science/CosmoRec/CosmoRec.html) recombination model.
To use this, the library must be built with CosmoRec installed and RECOMBINATION_FILES including cosmorec
in the Makefile.
CosmoRec must be built with -fPIC added to the compiler flags.
* **Variables:**
* **runmode** – (*integer*) Default 0: with diffusion; 1: without diffusion; 2: RECFAST++; 3: RECFAST++ run with correction
* **fdm** – (*float64*) Dark matter annihilation efficiency
* **accuracy** – (*float64*) 0-normal, 3-most accurate
### *class* camb.recombination.HyRec(\*args, \*\*kwargs)
Bases: [`RecombinationModel`](#camb.recombination.RecombinationModel)
[HyRec](https://github.com/nanoomlee/HYREC-2) recombination model.
To use this, the library must be built with HyRec installed and RECOMBINATION_FILES including hyrec in the Makefile.
## https://camb.readthedocs.io/en/latest/sources.html
# Source windows functions
### *class* camb.sources.SourceWindow(\*args, \*\*kwargs)
Abstract base class for a number count/lensing/21cm source window function.
A list of instances of these classes can be assigned to the SourceWindows field of [`model.CAMBparams`](model.md#camb.model.CAMBparams).
Note that source windows can currently only be used in flat models.
* **Variables:**
* **source_type** – (integer/string, one of: 21cm, counts, lensing)
* **bias** – (*float64*)
* **dlog10Ndm** – (*float64*)
### *class* camb.sources.GaussianSourceWindow(\*args, \*\*kwargs)
Bases: [`SourceWindow`](#camb.sources.SourceWindow)
A Gaussian W(z) source window function.
* **Variables:**
* **redshift** – (*float64*)
* **sigma** – (*float64*)
### *class* camb.sources.SplinedSourceWindow(\*args, \*\*kwargs)
Bases: [`SourceWindow`](#camb.sources.SourceWindow)
A numerical W(z) source window function constructed by interpolation from a numerical table.
#### set_table(z, W, bias_z=None, k_bias=None, bias_kz=None)
Set arrays of z and W(z) for cubic spline interpolation. Note that W(z) is the total count distribution
observed, not a fractional selection function on an underlying distribution.
* **Parameters:**
* **z** – array of redshift values (monotonically increasing)
* **W** – array of window function values. It must be well enough sampled to smoothly cubic-spline interpolate
* **bias_z** – optional array of bias values at each z for scale-independent bias
* **k_bias** – optional array of k values for bias (Mpc^-1)
* **bias_kz** – optional 2D contiguous array for scale-dependent bias(k, z).
Must ensure the range of k is large enough to cover required values.
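For example, a sketch setting up one Gaussian and one tabulated window and getting the angular spectra (the keyword constructor and the results method get_source_cls_dict follow the usual CAMB python patterns; numbers are illustrative):
```default
import numpy as np
import camb
from camb.sources import GaussianSourceWindow, SplinedSourceWindow

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=1000)

zs = np.linspace(0.01, 2.0, 200)
W = np.exp(-((zs - 0.6) / 0.2) ** 2)          # illustrative total count distribution
splined = SplinedSourceWindow(source_type='counts', bias=1.5)
splined.set_table(zs, W)

pars.SourceWindows = [
    GaussianSourceWindow(redshift=0.3, sigma=0.05, source_type='counts', bias=1.2),
    splined]

results = camb.get_results(pars)
cls = results.get_source_cls_dict()  # keys like 'W1xW1', 'W1xW2', 'PxW1', ...
```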
## https://camb.readthedocs.io/en/latest/correlations.html
# Correlation functions
Functions to transform CMB angular power spectra into correlation functions (cl2corr)
and vice versa (corr2cl), and calculate lensed power spectra from unlensed ones.
The lensed power spectrum functions are not intended to replace those calculated by default when getting CAMB results,
but may be useful for tests, e.g. using different lensing potential power spectra, partially-delensed
lensing power spectra, etc.
These functions are all pure python/scipy, and operate and return cls including
factors $\ell(\ell+1)/2\pi$ (for CMB) and $[L(L+1)]^2/2\pi$ (for lensing).
A. Lewis, December 2016
### camb.correlations.cl2corr(cls, xvals, lmax=None)
Get the correlation function from the power spectra, evaluated at points cos(theta) = xvals.
Use roots of Legendre polynomials (np.polynomial.legendre.leggauss) for accurate back integration with corr2cl.
Note it currently does not work at xvals=1 (that can easily be calculated as a special case).
* **Parameters:**
* **cls** – 2D array cls(L,ix), with L ($\equiv \ell$) starting at zero and ix=0,1,2,3 in
order TT, EE, BB, TE. cls should include $\ell(\ell+1)/2\pi$ factors.
* **xvals** – array of $\cos(\theta)$ values at which to calculate correlation function.
* **lmax** – optional maximum L to use from the cls arrays
* **Returns:**
2D array of corrs[i, ix], where ix=0,1,2,3 are T, Q+U, Q-U and cross
### camb.correlations.corr2cl(corrs, xvals, weights, lmax)
Transform from correlation functions to power spectra.
Note that using cl2corr followed by corr2cl is generally very accurate (< 1e-5 relative error) if
xvals, weights = np.polynomial.legendre.leggauss(lmax+1)
* **Parameters:**
* **corrs** – 2D array, corrs[i, ix], where ix=0,1,2,3 are T, Q+U, Q-U and cross
* **xvals** – values of $\cos(\theta)$ at which corrs stores values
* **weights** – weights for integrating each point in xvals. Typically from np.polynomial.legendre.leggauss
* **lmax** – maximum $\ell$ to calculate $C_\ell$
* **Returns:**
array of power spectra, cl[L, ix], where L starts at zero and ix=0,1,2,3 in order TT, EE, BB, TE.
They include $\ell(\ell+1)/2\pi$ factors.
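For example, a round-trip sketch using Gauss-Legendre points (cosmological parameters are illustrative; get_total_cls returns spectra in the required $\ell(\ell+1)C_\ell/2\pi$ convention):
```default
import numpy as np
import camb
from camb import correlations

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965, lmax=2500)
results = camb.get_results(pars)
lmax = 2000
cls = results.get_total_cls(lmax=lmax, CMB_unit='muK')   # columns TT, EE, BB, TE

xvals, weights = np.polynomial.legendre.leggauss(lmax + 1)
corrs = correlations.cl2corr(cls, xvals, lmax=lmax)      # T, Q+U, Q-U, cross
cls_back = correlations.corr2cl(corrs, xvals, weights, lmax)
print(np.max(np.abs(cls_back[2:, 0] - cls[2:, 0])))      # round-trip TT difference
```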
### camb.correlations.gauss_legendre_correlation(cls, lmax=None, sampling_factor=1)
Transform power spectrum cls into correlation functions evaluated at the
roots of the Legendre polynomials for Gauss-Legendre quadrature. Returns correlation function array,
evaluation points and weights.
Result can be passed to corr2cl for accurate back transform.
* **Parameters:**
* **cls** – 2D array cls(L,ix), with L ($\equiv \ell$) starting at zero and ix=0,1,2,3 in
order TT, EE, BB, TE. Should include $\ell(\ell+1)/2\pi$ factors.
* **lmax** – optional maximum L to use
* **sampling_factor** – uses Gauss-Legendre with degree lmax\*sampling_factor+1
* **Returns:**
corrs, xvals, weights; corrs[i, ix] is 2D array where ix=0,1,2,3 are T, Q+U, Q-U and cross
### camb.correlations.legendre_funcs(lmax, x, m=(0, 2), lfacs=None, lfacs2=None, lrootfacs=None)
Utility function to return array of Legendre and $d_{mn}$ functions for all $\ell$ up to lmax.
Note that $d_{mn}$ arrays start at $\ell_{\rm min} = \max(m,n)$, so returned arrays are different sizes
* **Parameters:**
* **lmax** – maximum $\ell$
* **x** – scalar value of $\cos(\theta)$ at which to evaluate
* **m** – m values to calculate $d_{m,n}$, etc. as relevant
* **lfacs** – optional pre-computed $\ell(\ell+1)$ float array
* **lfacs2** – optional pre-computed $(\ell+2)*(\ell-1)$ float array
* **lrootfacs** – optional pre-computed sqrt(lfacs\*lfacs2) array
* **Returns:**
$(P,P'),(d_{11},d_{-1,1}), (d_{20}, d_{22}, d_{2,-2})$ as requested, where P starts
at $\ell=0$, but spin functions start at $\ell=\ell_{\rm min}$
### camb.correlations.lensed_cl_derivative_unlensed(clpp, lmax=None, theta_max=0.09817477042468103, apodize_point_width=10, sampling_factor=1.4)
Get derivative dcl of lensed minus unlensed power $D_\ell \equiv \ell(\ell+1)\Delta C_\ell/2\pi$ with respect
to $\ell(\ell+1)C^{\rm unlens}_\ell/2\pi$
The difference in power in the lensed spectrum is given by dCL[ix, :, :].dot(cl),
where cl is the appropriate $\ell(\ell+1)C^{\rm unlens}_\ell/2\pi$.
Uses the non-perturbative curved-sky results from Eqs 9.12 and 9.16-9.18 of
[astro-ph/0601594](https://arxiv.org/abs/astro-ph/0601594), to second order in $C_{{\rm gl},2}$
* **Parameters:**
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **lmax** – optional maximum L to use from the clpp array
* **theta_max** – maximum angle (in radians) to keep in the correlation functions
* **apodize_point_width** – if theta_max is set, apodize around the cut using half Gaussian of approx
width apodize_point_width/lmax\*pi
* **sampling_factor** – npoints = int(sampling_factor\*lmax)+1
* **Returns:**
array dCL[ix, ell, L], where ix=0,1,2,3 are TT, EE, BB, TE and result is
$d\left(\Delta D^{\rm ix}_\ell\right) / d D^{{\rm unlens},j}_L$ where j[ix] are TT, EE, EE, TE
### camb.correlations.lensed_cl_derivatives(cls, clpp, lmax=None, theta_max=0.09817477042468103, apodize_point_width=10, sampling_factor=1.4)
Get derivative dcl of lensed $D_\ell\equiv \ell(\ell+1)C_\ell/2\pi$ with respect to $\log(C^{\phi}_L)$.
To leading order (and hence not actually accurate), the lensed correction to the power spectrum
is given by dcl[ix,:,:].dot(np.ones(clpp.shape)).
Uses the non-perturbative curved-sky results from Eqs 9.12 and 9.16-9.18 of
[astro-ph/0601594](https://arxiv.org/abs/astro-ph/0601594), to second order in $C_{{\rm gl},2}$
* **Parameters:**
* **cls** – 2D array of unlensed cls(L,ix), with L starting at zero and ix=0,1,2,3 in order TT, EE, BB, TE.
cls should include $\ell(\ell+1)/2\pi$ factors.
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **lmax** – optional maximum L to use from the cls arrays
* **theta_max** – maximum angle (in radians) to keep in the correlation functions
* **apodize_point_width** – if theta_max is set, apodize around the cut using half Gaussian of approx
width apodize_point_width/lmax\*pi
* **sampling_factor** – npoints = int(sampling_factor\*lmax)+1
* **Returns:**
array dCL[ix, ell, L], where ix=0,1,2,3 are TT, EE, BB, TE and result
is $d[D^{\rm ix}_\ell]/ d (\log C^{\phi}_L)$
### camb.correlations.lensed_cls(cls, clpp, lmax=None, lmax_lensed=None, sampling_factor=1.4, delta_cls=False, theta_max=0.09817477042468103, apodize_point_width=10, leggaus=True, cache=True)
Get the lensed power spectra from the unlensed power spectra and the lensing potential power.
Uses the non-perturbative curved-sky results from Eqs 9.12 and 9.16-9.18 of
[astro-ph/0601594](https://arxiv.org/abs/astro-ph/0601594), to second order in $C_{{\rm gl},2}$.
Correlations are calculated for Gauss-Legendre integration if leggaus=True; this slows it by several seconds,
but it will be much faster on subsequent calls with the same lmax\*sampling_factor.
If Gauss-Legendre is not used, sampling_factor needs to be about 2 times larger for the same accuracy.
For a reference implementation with the full integral range and no apodization set theta_max=None.
Note that this function does not pad high $\ell$ with a smooth fit (like CAMB’s main functions); for
accurate results it should be called with lmax high enough that the input cls are effectively band limited
(lmax >= 2500, or higher for accurate BB to small scales).
Usually lmax truncation errors are far larger than other numerical errors for lmax<4000.
For a faster result use get_lensed_cls_with_spectrum.
* **Parameters:**
* **cls** – 2D array of unlensed cls(L,ix), with L starting at zero and ix=0,1,2,3 in order TT, EE, BB, TE.
cls should include $\ell(\ell+1)/2\pi$ factors.
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **lmax** – optional maximum L to use from the cls arrays
* **lmax_lensed** – optional maximum L for the returned cl array (lmax_lensed <= lmax)
* **sampling_factor** – npoints = int(sampling_factor\*lmax)+1
* **delta_cls** – if true, return the difference between lensed and unlensed (optional, default False)
* **theta_max** – maximum angle (in radians) to keep in the correlation functions; default: pi/32
* **apodize_point_width** – if theta_max is set, apodize around the cut using half Gaussian of approx
width apodize_point_width/lmax\*pi
* **leggaus** – whether to use Gauss-Legendre integration (default True)
* **cache** – if leggaus = True, set cache to save the x values and weights between calls (most of the time)
* **Returns:**
2D array of cls[L, ix], with L starting at zero and ix=0,1,2,3 in order TT, EE, BB, TE.
cls include $\ell(\ell+1)/2\pi$ factors.
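For example, a sketch using the unlensed scalar spectra and lensing potential from a standard CAMB run (the results methods get_unlensed_scalar_cls and get_lens_potential_cls are assumed from the results module; values are illustrative):
```default
import camb
from camb import correlations

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_for_lmax(3000, lens_potential_accuracy=1)
results = camb.get_results(pars)

cl_unlensed = results.get_unlensed_scalar_cls(CMB_unit='muK')  # TT, EE, BB, TE with l(l+1)/2pi
cl_pp = results.get_lens_potential_cls()[:, 0]                 # [L(L+1)]^2 C_L^phiphi/2pi
lensed = correlations.lensed_cls(cl_unlensed, cl_pp, lmax=2500, lmax_lensed=2000)
```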
### camb.correlations.lensed_correlations(cls, clpp, xvals, weights=None, lmax=None, delta=False, theta_max=None, apodize_point_width=10)
Get the lensed correlation function from the unlensed power spectra, evaluated at
points $\cos(\theta)$ = xvals.
Use roots of Legendre polynomials (np.polynomial.legendre.leggauss) for accurate back integration with corr2cl.
Note it currently does not work at xvals=1 (that can easily be calculated as a special case).
To get the lensed cls efficiently, set weights to the integral weights for each x value, then function returns
lensed correlations and lensed cls.
Uses the non-perturbative curved-sky results from Eqs 9.12 and 9.16-9.18 of
[astro-ph/0601594](https://arxiv.org/abs/astro-ph/0601594), to second order in $C_{{\rm gl},2}$
* **Parameters:**
* **cls** – 2D array of unlensed cls(L,ix), with L ($\equiv\ell$) starting at zero and ix=0,1,2,3 in
order TT, EE, BB, TE. cls should include $\ell(\ell+1)/2\pi$ factors.
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **xvals** – array of $\cos(\theta)$ values at which to calculate correlation function.
* **weights** – if given also return lensed $C_\ell$, otherwise just lensed correlations
* **lmax** – optional maximum L to use from the cls arrays
* **delta** – if true, calculate the difference between lensed and unlensed (default False)
* **theta_max** – maximum angle (in radians) to keep in the correlation functions
* **apodize_point_width** – smoothing scale for apodization at truncation of correlation function
* **Returns:**
2D array of corrs[i, ix], where ix=0,1,2,3 are T, Q+U, Q-U and cross;
if weights is not None, then return corrs, lensed_cls
### camb.correlations.lensing_R(clpp, lmax=None)
Get $R \equiv \frac{1}{2} \langle |\nabla \phi|^2\rangle$
* **Parameters:**
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum
* **lmax** – optional maximum L to use from the cls arrays
* **Returns:**
R
### camb.correlations.lensing_correlations(clpp, xvals, lmax=None)
Get the $\sigma^2(x)$ and $C_{{\rm gl},2}(x)$ functions from the lensing power spectrum
* **Parameters:**
* **clpp** – array of $[L(L+1)]^2 C_L^{\phi\phi}/2\pi$ lensing potential power spectrum (zero based)
* **xvals** – array of $\cos(\theta)$ values at which to calculate correlation function.
* **lmax** – optional maximum L to use from the clpp array
* **Returns:**
array of $\sigma^2(x)$, array of $C_{{\rm gl},2}(x)$
## https://camb.readthedocs.io/en/latest/postborn.html
# Post-Born lensing
### camb.postborn.get_field_rotation_BB(params, lmax=None, acc=1, CMB_unit='muK', raw_cl=False, spline=True)
Get the B-mode power spectrum from post-born field rotation, based on perturbative and Limber approximations.
See [arXiv:1605.05662](https://arxiv.org/abs/1605.05662).
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance with cosmological parameters etc.
* **lmax** – maximum $\ell$
* **acc** – accuracy
* **CMB_unit** – units for CMB output relative to dimensionless
* **raw_cl** – return $C_\ell$ rather than $\ell(\ell+1)C_\ell/2\pi$
* **spline** – return InterpolatedUnivariateSpline, otherwise return tuple of lists of $\ell$
and $C_\ell$
* **Returns:**
InterpolatedUnivariateSpline (or arrays of sampled $\ell$ and) $\ell^2 C_\ell^{BB}/(2 \pi)$
(unless raw_cl, in which case just $C_\ell^{BB}$)
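For example (parameter values are illustrative; the returned spline can be evaluated at any multipole up to lmax):
```default
import camb
from camb import postborn

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
BB_rot = postborn.get_field_rotation_BB(pars, lmax=2000)
print(BB_rot(1500))  # ell^2 C_ell^BB/(2 pi) in muK^2 at ell=1500
```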
### camb.postborn.get_field_rotation_power(params, kmax=100, lmax=20000, non_linear=True, z_source=None, k_per_logint=None, acc=1, lsamp=None)
Get field rotation power spectrum, $C_L^{\omega\omega}$,
following [arXiv:1605.05662](https://arxiv.org/abs/1605.05662). Uses the lowest Limber approximation.
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance with cosmological parameters etc.
* **kmax** – maximum k (in ${\rm Mpc}^{-1}$ units)
* **lmax** – maximum L
* **non_linear** – include non-linear corrections
* **z_source** – redshift of source. If None, use peak of CMB visibility for CMB lensing
* **k_per_logint** – sampling to use in k
* **acc** – accuracy setting, increase to test stability
* **lsamp** – array of L values to compute output at. If not set, set to sampling good for interpolation
* **Returns:**
$L$, $C_L^{\omega\omega}$: the L sample values and corresponding rotation power
## https://camb.readthedocs.io/en/latest/emission_angle.html
# Lensing emission angle
This module calculates the corrections to the standard lensed CMB power spectra results due to time delay and
emission angle, following [arXiv:1706.02673](https://arxiv.org/abs/1706.02673). This can be combined with the result
from the postborn module to estimate the leading corrections to the standard lensing B modes.
Corrections to T and E are negligible and are not calculated. The result for BB approximately includes contributions
from reionization, but this can optionally be turned off.
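For example, a sketch evaluating both contributions at a single multipole (note the two functions return splines with different $\ell$-weighting conventions, as noted in the respective function descriptions; values are illustrative):
```default
import numpy as np
import camb
from camb import emission_angle, postborn

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
lmax = 2000
BB_delay = emission_angle.get_emission_delay_BB(pars, lmax=lmax)  # l(l+1)C_l^BB/2pi, muK^2
BB_rot = postborn.get_field_rotation_BB(pars, lmax=lmax)          # l^2 C_l^BB/2pi, muK^2

ell = 1000
# Convert both to raw C_l^BB before adding the two contributions
total_CBB = 2 * np.pi * (BB_delay(ell) / (ell * (ell + 1)) + BB_rot(ell) / ell ** 2)
```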
### camb.emission_angle.get_emission_angle_powers(camb_background, PK, chi_source, lmax=3000, acc=1, lsamp=None)
Get the power spectrum of $\psi_d$, the potential for the emission angle, and its cross with standard lensing.
Uses the Limber approximation (and assumes flat universe).
* **Parameters:**
* **camb_background** – a CAMB results object, used for calling background functions
* **PK** – a matter power spectrum interpolator (from camb.get_matter_power_interpolator)
* **chi_source** – comoving radial distance of source in Mpc
* **lmax** – maximum L
* **acc** – accuracy parameter
* **lsamp** – L sampling for the result
* **Returns:**
an InterpolatedUnivariateSpline object containing $L(L+1) C_L$
### camb.emission_angle.get_emission_delay_BB(params, kmax=100, lmax=3000, non_linear=True, CMB_unit='muK', raw_cl=False, acc=1, lsamp=None, return_terms=False, include_reionization=True)
Get B modes from emission angle and time delay effects.
Uses full-sky result from appendix of [arXiv:1706.02673](https://arxiv.org/abs/1706.02673)
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance with cosmological parameters etc.
* **kmax** – maximum k (in ${\rm Mpc}^{-1}$ units)
* **lmax** – maximum $\ell$
* **non_linear** – include non-linear corrections
* **CMB_unit** – normalization for the result
* **raw_cl** – if true return $C_\ell$, else $\ell(\ell+1)C_\ell/2\pi$
* **acc** – accuracy setting, increase to test stability
* **lsamp** – array of $\ell$ values to compute output at. If not set, set to sampling good for interpolation
* **return_terms** – return the three sub-terms separately rather than the total
* **include_reionization** – approximately include reionization terms by second scattering surface
* **Returns:**
InterpolatedUnivariateSpline for $C_\ell^{BB}$
### camb.emission_angle.get_source_cmb_cl(params, CMB_unit='muK')
Get the angular power spectrum of emission angle and time delay sources $\psi_t$, $\psi_\zeta$,
as well as the perpendicular velocity and E polarization.
All are returned with 1 and 2 versions, for recombination and reionization respectively.
Note that this function destroys any custom sources currently configured.
* **Parameters:**
* **params** – [`model.CAMBparams`](model.md#camb.model.CAMBparams) instance with cosmological parameters etc.
* **CMB_unit** – scale results from dimensionless, use ‘muK’ for $\mu K^2$ units
* **Returns:**
dictionary of power spectra, with $L(L+1)/2\pi$ factors.
## https://camb.readthedocs.io/en/latest/transfer_variables.html
# Matter power spectrum and matter transfer function variables
The various matter power spectrum functions, e.g. [`get_matter_power_interpolator()`](camb.md#camb.get_matter_power_interpolator), can calculate power
spectra for various quantities. Each variable used to form the power spectrum has a name as follows:
| name | number | description |
|---------------------|--------|-------------|
| k/h | 1 | $k/h$ |
| delta_cdm | 2 | $\Delta_c$, CDM density |
| delta_baryon | 3 | $\Delta_b$, baryon density |
| delta_photon | 4 | $\Delta_\gamma$, photon density |
| delta_neutrino | 5 | $\Delta_r$, for massless neutrinos |
| delta_nu | 6 | $\Delta_\nu$ for massive neutrinos |
| delta_tot | 7 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b+\rho_\nu\Delta_\nu}{\rho_c+\rho_b+\rho_\nu}$, CDM+baryons+massive neutrino density |
| delta_nonu | 8 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b}{\rho_b+\rho_c}$, CDM+baryon density |
| delta_tot_de | 9 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b+\rho_\nu\Delta_\nu+\rho_{de}\Delta_{de}}{\rho_c+\rho_b+\rho_\nu}$, CDM+baryons+massive neutrinos+dark energy (numerator only) density |
| Weyl | 10 | $k^2\Psi\equiv k^2(\phi+\psi)/2$, the Weyl potential scaled by $k^2$ to scale in $k$ like a density |
| v_newtonian_cdm | 11 | $-v_{N,c}\, k/{\cal H}$ (where $v_{N,c}$ is the Newtonian-gauge CDM velocity) |
| v_newtonian_baryon | 12 | $-v_{N,b}\,k/{\cal H}$ (Newtonian-gauge baryon velocity $v_{N,b}$) |
| v_baryon_cdm | 13 | $v_b-v_c$, relative baryon-CDM velocity |
Each name corresponds to a numerical index; in Fortran these are the same as *model.name*,
where *name* is one of the Transfer_xxx variable names:
Transfer_kh=1,Transfer_cdm=2, Transfer_b=3, Transfer_g=4, Transfer_r=5, Transfer_nu=6, Transfer_tot=7,
Transfer_nonu=8, Transfer_tot_de=9, Transfer_Weyl=10, Transfer_Newt_vel_cdm=11, Transfer_Newt_vel_baryon=12,
Transfer_vel_baryon_cdm = 13.
So for example, requesting var1=’delta_baryon’, var2=’Weyl’ or alternatively var1=model.Transfer_b, var2=model.Transfer_Weyl
would get the power spectrum for the cross-correlation of the baryon density with the Weyl potential.
All density variables $\Delta_i$ here are synchronous gauge.
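For example, a sketch of the cross-spectrum request described above (parameter values are illustrative; log interpolation is turned off since a cross-spectrum need not be positive):
```default
import camb

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
PK = camb.get_matter_power_interpolator(pars, var1='delta_baryon', var2='Weyl',
                                        nonlinear=False, hubble_units=False, k_hunit=False,
                                        kmax=2.0, zmax=2.0, log_interp=False)
print(PK.P(0.5, 0.1))  # baryon-Weyl cross power at z=0.5, k=0.1 Mpc^-1
```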
For transfer function variables (rather than matter power spectra), the variables are normalized corresponding to
unit primordial curvature perturbation on super-horizon scales. The
[`get_matter_transfer_data()`](results.md#camb.results.CAMBdata.get_matter_transfer_data) function returns the above quantities
divided by $k^2$ (so they are roughly constant at low $k$ on super-horizon scales).
The [example notebook](https://camb.readthedocs.io/en/latest/CAMBdemo.html) has various examples of getting the
matter power spectrum, relating the Weyl-potential spectrum to lensing, and calculating the
baryon-dark matter relative velocity spectra. There is also an explicit example of how to calculate the matter
power spectrum manually from the matter transfer functions.
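For example, a sketch of reading the raw transfer functions (the transfer_data array layout [variable index, k index, redshift index] follows the standard results interface; values are illustrative):
```default
import camb
from camb import model

pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
pars.set_matter_power(redshifts=[0.0], kmax=2.0)
results = camb.get_results(pars)

trans = results.get_matter_transfer_data()
kh = trans.transfer_data[model.Transfer_kh - 1, :, 0]          # k/h sample values
delta_cdm = trans.transfer_data[model.Transfer_cdm - 1, :, 0]  # Delta_c transfer (divided by k^2) at z=0
```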
When generating dark-age 21cm power spectra (do21cm is set) the transfer functions are instead the *model.name*
variables (see equations 20 and 25 of [astro-ph/0702600](https://arxiv.org/abs/astro-ph/0702600))
| name | number | description |
|---------------------------|--------|-------------|
| Transfer_kh | 1 | $k/h$ |
| Transfer_cdm | 2 | $\Delta_c$, CDM density |
| Transfer_b | 3 | $\Delta_b$, baryon density |
| Transfer_monopole | 4 | $\Delta_s+(r_\tau-1)(\Delta_{b}-\Delta_{T_s})$, 21cm monopole source |
| Transfer_vnewt | 5 | $r_\tau kv_{N,b}/\mathcal{H}$, 21cm Newtonian-gauge velocity source |
| Transfer_Tmat | 6 | $\Delta_{T_m}$, matter temperature perturbation |
| Transfer_tot | 7 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b+\rho_\nu\Delta_\nu}{\rho_c+\rho_b+\rho_\nu}$, CDM+baryons+massive neutrino density |
| Transfer_nonu | 8 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b}{\rho_b+\rho_c}$, CDM+baryon density |
| Transfer_tot_de | 9 | $\frac{\rho_c\Delta_c+\rho_b\Delta_b+\rho_\nu\Delta_\nu+\rho_{de}\Delta_{de}}{\rho_c+\rho_b+\rho_\nu}$, CDM+baryons+massive neutrinos+dark energy (numerator only) density |
| Transfer_Weyl | 10 | $k^2\Psi\equiv k^2(\phi+\psi)/2$, the Weyl potential scaled by $k^2$ to scale in $k$ like a density |
| Transfer_Newt_vel_cdm | 11 | $-v_{N,c}\, k/{\cal H}$ (where $v_{N,c}$ is the Newtonian-gauge CDM velocity) |
| Transfer_Newt_vel_baryon | 12 | $-v_{N,b}\,k/{\cal H}$ (Newtonian-gauge baryon velocity $v_{N,b}$) |
| Transfer_vel_baryon_cdm | 13 | $v_b-v_c$, relative baryon-CDM velocity |
If use_21cm_mK is set the 21cm results are multiplied by $T_b$ to give results in mK units.
## https://camb.readthedocs.io/en/latest/modifying_code.html
# Modifying the code
Although CAMB supports some non-standard models by default (e.g. some early dark energy models), when you have a new
model you’ll generally need to modify the code. Simple cases that do not need code modification are:
- Dark energy fluid models with a given equation of state but constant sound speed (see [Dark Energy models](dark_energy.md))
- Different primordial power spectra (see [Initial power spectra](initialpower.md))
- Different BBN mappings for the Helium abundance (which is pure Python, see [BBN models](bbn.md))
In these cases, you can just pass in an interpolation table from Python to encapsulate the modified physics.
For understanding CAMB’s variable naming conventions and gauge choices when modifying the code, see the [CAMB Variables and Gauge Conventions](variables_guide.md).
## Defining new classes
For other changes to the dark energy, initial power, reionization, recombination, or non-linear correction, you can usually
define new classes that inherit from the standard base classes. The classes are defined in both Python and Fortran, so you
will need to modify both. Ensure variables are defined in the same order so that the Python interface works consistently.
The easiest way to do this is probably to look at the source code for, e.g., AxionEffectiveFluid in both Fortran and Python and follow the same pattern.
In Python, CAMB uses the @fortran_class decorator to implement the Fortran wrapping. This implements a general but custom mapping between
F2008 Fortran classes (types) and Python classes. For example, the AxionEffectiveFluid Python class looks like this:
```default
@fortran_class
class AxionEffectiveFluid(DarkEnergyModel):
_fields_ = [
("w_n", c_double, "effective equation of state parameter"),
("fde_zc", c_double, "energy density fraction at z=zc"),
("zc", c_double, "decay transition redshift (not the same as peak of energy density fraction)"),
("theta_i", c_double, "initial condition field value")
]
_fortran_class_name_ = 'TAxionEffectiveFluid'
_fortran_class_module_ = 'DarkEnergyFluid'
```
The \_fields_ and \_fortran_class_name_ class variables are special (metaclass) metadata for the Fortran mapping, so that it knows which class to map to and
what the corresponding variable names are. It is not necessary to include all Fortran variables in the \_fields_ array, but they must be in
the right order, and any omitted variables must come at the end.
In Fortran, the corresponding class is defined as:
```default
type, extends(TDarkEnergyModel) :: TAxionEffectiveFluid
real(dl) :: w_n = 1._dl ! Effective equation of state when oscillating
real(dl) :: fde_zc = 0._dl ! Energy density fraction at a_c (not the same as peak dark energy fraction)
real(dl) :: zc ! Transition redshift (scale factor a_c)
real(dl) :: theta_i = const_pi/2 ! Initial value
! om is Omega of the early DE component today (assumed to be negligible compared to omega_lambda)
! omL is the lambda component of the total dark energy omega
real(dl), private :: a_c, pow, om, omL, acpow, freq, n ! Cached internally
contains
procedure :: ReadParams => TAxionEffectiveFluid_ReadParams
procedure, nopass :: PythonClass => TAxionEffectiveFluid_PythonClass
procedure, nopass :: SelfPointer => TAxionEffectiveFluid_SelfPointer
procedure :: Init => TAxionEffectiveFluid_Init
procedure :: w_de => TAxionEffectiveFluid_w_de
procedure :: grho_de => TAxionEffectiveFluid_grho_de
procedure :: PerturbedStressEnergy => TAxionEffectiveFluid_PerturbedStressEnergy
procedure :: PerturbationEvolve => TAxionEffectiveFluid_PerturbationEvolve
end type TAxionEffectiveFluid
```
Here, the a_c, pow, etc., variables are not mapped to Python because they are only used in the Fortran code.
The rest of the Fortran code defines the relevant methods (TAxionEffectiveFluid_grho_de, etc.) to implement the modified physics.
All Fortran classes that map to Python must have a SelfPointer function as defined above. This always takes exactly the same form:
```default
subroutine TMyClass_SelfPointer(cptr, P)
use iso_c_binding
Type(c_ptr) :: cptr
Type (TMyClass), pointer :: PType
class (TPythonInterfacedClass), pointer :: P
call c_f_pointer(cptr, PType)
P => PType
end subroutine TMyClass_SelfPointer
```
and hence can be trivially modified for any new classes. The function is essential so that the Python wrapper can
map Python and strongly typed Fortran classes consistently (not something that is supported by standard iso_c_binding).
The ReadParams function is only needed if you want to also be able to load parameters from .ini files, rather than just
using them via Python. The PythonClass method is not strictly needed.
There are separate [fortran code docs](https://camb.info/doc/).
## Other code changes
For quintessence models with a different potential, you will need to modify the Fortran to define the new potential function.
See [`Quintessence`](dark_energy.md#camb.dark_energy.Quintessence) and the DarkEnergyQuintessence.f90 Fortran source. This may be a simple change,
though you may also need more complicated changes to consistently map input parameters into initial conditions for the evolution.
More generally, you will need to modify the equations at both the background and the perturbation level, usually in equations.f90.
The [CAMB notes](https://cosmologist.info/notes/CAMB.pdf) provide some guidance on conventions and variable definitions.
## Code updates, testing, and gotchas
Make sure you recompile the Fortran after making any changes (see [Fortran compilers](fortran_compilers.md)).
Changing the version number in both Python and Fortran will give you an automatic run-time check that the Python being run matches the
intended Fortran source.
The default accuracy parameters are designed for Simons Observatory-like precision for standard models. Check that your results are stable to
increasing the accuracy parameters AccuracyBoost and lAccuracyBoost (in [`AccuracyParams`](model.md#camb.model.AccuracyParams)). If they are not, changing specific accuracy parameters as needed may be much more efficient than using the high-level parameter
AccuracyBoost (which increases the accuracy of many things at once).
There are a number of possible gotchas when using Python-wrapped Fortran types. Firstly, types derived directly from CAMB_Structure are intended to map directly
to Fortran types (via the standard ctypes interface), for example, AccuracyParams is inherited directly from CAMB_Structure. These should generally not be
instantiated directly in Python as they are only intended to be used as sub-components of larger types. For example, a new Python instance of [`AccuracyParams`](model.md#camb.model.AccuracyParams) will
give a zero Fortran array, which does not correspond to the default values for the accuracy parameters.
Fortran-mapped classes in Python inherit from F2003Class. These also map data in a Fortran class type (the \_fields_ defined above).
If they are an allocatable subcomponent of another F2003Class, they may be created dynamically to match the underlying structure.
This can give unexpected results if you try to add variables to only the Python class. For example, if pars is a [`CAMBparams`](model.md#camb.model.CAMBparams) instance and test is not defined
then doing this:
```default
pars.DarkEnergy.test = 'x'
print(pars.DarkEnergy.test)
```
will not give you ‘x’; it will give you an undefined variable error. This is because the Python code doesn’t ‘know’ that the Fortran code is not modifying the
DarkEnergy structure, so pars.DarkEnergy is generating a new instance mapped to the underlying Fortran data whenever you access it.
You can avoid this by always defining fields in both Fortran and Python, or only using Python variables in container-level classes like [`CAMBparams`](model.md#camb.model.CAMBparams).
When using dark energy models, make sure you are not setting thetastar in Python before setting the dark energy parameters: it needs to know the dark
energy model to map thetastar into H0 consistently.
When accessing array-like members of a structure, e.g., CAMBparams.z_outputs, you may need to explicitly cast to a list to see the elements.
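For example:
```default
import camb

pars = camb.CAMBparams()
pars.z_outputs = [0.5, 1.0, 2.0]
# The wrapped array object may not display its elements directly; cast to a list to inspect them
print(list(pars.z_outputs))
```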
## Interfacing with Cobaya
The [Cobaya sampler](https://cobaya.readthedocs.org) can do parameter inference for your custom models. It uses introspection to determine which
variables the linked CAMB version supports, so if you add new variables e.g., to [`CAMBparams`](model.md#camb.model.CAMBparams) or as arguments to [`set_cosmology()`](model.md#camb.model.CAMBparams.set_cosmology) or the set_params
method of the dark energy, reionization, etc. classes, you should automatically be able to use them in Cobaya. For other new variables, you may need to modify [`get_valid_numerical_params()`](camb.md#camb.get_valid_numerical_params).
For supporting new primordial power spectra or multiple bins there are [test examples](https://github.com/CobayaSampler/cobaya/blob/master/tests/test_cosmo_multi_theory.py).
This also shows how to use get_class_options to dynamically define multiple parameters based on an input parameter.
You can only directly sample scalar parameters, but it is also easy to [map vector parameters](https://cobaya.readthedocs.io/en/latest/params_prior.html#vector-parameters).
Cobaya will automatically identify numerical arguments to the set_params
function of custom classes (e.g. dark energy), but for vector parameters to be picked up for sampling you need to define
them with a default value of None.
The [CosmoCoffee](https://cosmocoffee.info/viewforum.php?f=11) discussion forum can be used to ask questions and to see previous answers.
## https://camb.readthedocs.io/en/latest/variables_guide.html
# CAMB Variables and Gauge Conventions
This page provides a comprehensive guide to the variable naming conventions used in CAMB and their relationship to different gauge choices, particularly the synchronous gauge. For detailed mathematical derivations and symbolic manipulation, see the [ScalEqs notebook](https://camb.readthedocs.io/en/latest/ScalEqs.html), the [Symbolic manipulation](symbolic.md) module documentation, and the [technical notes](https://cosmologist.info/notes/CAMB.pdf).
**Units Convention:** CAMB uses natural units where $c = 1$ and distances are measured in Mpc. The gravitational coupling is $\kappa = 8\pi G$ with units of $\text{Mpc}^{-2}$.
## Overview
CAMB uses a covariant perturbation formalism that can be expressed in different gauge choices. The code internally works in the CDM frame (equivalent to synchronous gauge) but provides functions to convert to other gauges like the Newtonian gauge. The variable naming follows specific conventions that encode both the physical quantity and the species.
## Key Physical Quantities
The fundamental perturbation variables in CAMB represent different aspects of the perturbed spacetime and matter fields:
**Metric Perturbations:**
* $\phi$ **(phi)**: Weyl potential - gauge-invariant gravitational potential
* $\eta$ **(eta)**: Three-curvature perturbation (CAMB variable: `etak`)
* $\sigma$ **(sigma)**: Shear perturbation
* $z$: Expansion rate perturbation
* $\dot{h}$ **(hdot)**: Time derivative of scale factor perturbation
* $A$: Acceleration perturbation
**Matter Perturbations:**
* $\delta$ **(delta)**: Density perturbation
* $q$: Heat flux (momentum density)
* $\Pi$ **(Pi)**: Anisotropic stress
## Variable Naming Conventions
CAMB uses systematic naming conventions for variables:
**Species Indices:**
: * `g` - photons
* `r` - massless neutrinos
* `c` - CDM (Cold Dark Matter)
* `b` - baryons
* `nu` - massive neutrinos
* `de` - dark energy
**Variable Prefixes:**
: * `clx` - fractional density perturbations ($\Delta_i = \delta\rho_i/\rho_i$)
* `q` - heat flux variables
* `v` - velocity perturbations
* `pi` - anisotropic stress components
* `grho` - background densities (with `g` prefix for “$\kappa$ times $\rho$”)
* `dg` - total (summed over species) perturbation quantities
## CAMB Fortran Variables
This shows the correspondence between symbolic variables (in CDM frame) and CAMB Fortran variables:
#### **Density Perturbations**
| Symbolic | CAMB Variable | Description |
|----------------|-----------------|---------------------------------------------------|
| $\Delta_c$ | `clxc` | CDM fractional density perturbation |
| $\Delta_b$ | `clxb` | Baryon fractional density perturbation |
| $\Delta_g$ | `clxg` | Photon fractional density perturbation |
| $\Delta_r$ | `clxr` | Massless neutrino fractional density perturbation |
| $\Delta_{\nu}$ | `clxnu` | Massive neutrino fractional density perturbation |
| $\Delta_{de}$ | `clxde` | Dark energy fractional density perturbation |
#### **Velocity and Heat Flux**
| Symbolic | CAMB Variable | Description |
|------------|-----------------|-----------------------------|
| $v_b$ | `vb` | Baryon velocity |
| $q_g$ | `qg` | Photon heat flux |
| $q_r$ | `qr` | Massless neutrino heat flux |
| $q_{\nu}$ | `qnu` | Massive neutrino heat flux |
#### **Anisotropic Stress**
| Symbolic | CAMB Variable | Description |
|-------------|-----------------|--------------------------------------|
| $\pi_g$ | `pig` | Photon anisotropic stress |
| $\pi_r$ | `pir` | Massless neutrino anisotropic stress |
| $\pi_{\nu}$ | `pinu` | Massive neutrino anisotropic stress |
#### **Metric and Total Quantities**
| Symbolic | CAMB Variable | Description |
|------------------|-----------------|---------------------------------------------------------------------|
| $\eta$ | `etak` | Three-curvature ($\mathrm{etak} = k\eta_s = -k\eta/2$ in CDM frame) |
| $\delta$ (total) | `dgrho` | Total density perturbation ($\kappa a^2 \sum_i \rho_i\Delta_i$) |
| $q$ (total) | `dgq` | Total heat flux ($\kappa a^2 \sum_i \rho_i q_i$) |
| $\Pi$ (total) | `dgpi` | Total anisotropic stress |
## Physical Interpretation
**Heat Flux Relations:**
For each species i, the heat flux $q_i$ is related to the velocity $v_i$ by:
$$
\rho_i q_i = (\rho_i + p_i)v_i
$$
This gives:
* For relativistic species (photons, massless neutrinos): $q_i = \frac{4}{3}v_i$
* For non-relativistic matter: $q_i \approx v_i$
**Density Perturbations:**
The fractional density perturbations $\Delta_i = \delta\rho_i/\rho_i$ are gauge-dependent. In synchronous gauge, they represent the fractional density contrast in the frame comoving with the CDM. Note the distinction: $\delta\rho_i$ is the absolute density perturbation, while $\Delta_i$ is the fractional (relative) density perturbation.
**Anisotropic Stress:**
The anisotropic stress $\pi_i$ represents the traceless part of the stress tensor and is gauge-invariant. It is important for:
* Photon polarization ($\pi_g$)
* Free-streaming neutrinos ($\pi_r$, $\pi_{\nu}$)
* Gravitational wave generation
## Background Variables
CAMB also defines background (unperturbed) quantities with specific naming:
#### **Background Densities and Pressures**
| Symbolic | CAMB Variable | Description |
|--------------|-----------------|--------------------------------------------------------------|
| $\rho_b$ | `grhob_t` | Baryon background density ($\kappa\rho_b a^2$) |
| $\rho_c$ | `grhoc_t` | CDM background density ($\kappa\rho_c a^2$) |
| $\rho_g$ | `grhog_t` | Photon background density ($\kappa\rho_g a^2$) |
| $\rho_r$ | `grhor_t` | Massless neutrino background density ($\kappa\rho_r a^2$) |
| $\rho_{\nu}$ | `grhonu_t` | Massive neutrino background density ($\kappa\rho_{\nu} a^2$) |
| $\rho_{de}$ | `grhov_t` | Dark energy background density ($\kappa\rho_{de} a^2$) |
| $H$ | `adotoa` | Hubble parameter (conformal time) |
**Note:** The `g` prefix in CAMB variables stands for “$\kappa$ times” where $\kappa = 8\pi G$, and densities are stored as $\kappa\rho a^2$ with units of $\text{Mpc}^{-2}$ for numerical convenience.
## Synchronous Gauge Details
**CDM Frame**
CAMB natively works in the CDM frame where:
* CDM velocity: $v_c = 0$
* Acceleration: $A = 0$
This is equivalent to the synchronous gauge with the gauge choice that the CDM is at rest.
**Synchronous Gauge Metric**
In synchronous gauge, the metric takes the form:
$$
ds^2 = a^2(\tau)[d\tau^2 - (\delta_{ij} + h_{ij})dx^i dx^j]
$$
where the metric perturbation $h_{ij}$ can be decomposed into scalar, vector, and tensor parts.
**CAMB’s Synchronous Gauge Variables:**
* $\eta_s$: Related to the trace of $h_{ij}$ (synchronous gauge curvature)
* $\dot{h}_s$: Time derivative of the trace
* **etak**: $k\eta_s$ (the variable actually stored in CAMB)
**Conversion Relations:**
From CAMB’s covariant variables to synchronous gauge:
$$
\eta_s = -\frac{\eta}{2K_{\mathrm{fac}}} \quad \text{where} \quad K_{\mathrm{fac}} = 1 - \frac{3K}{k^2}
$$
$$
\mathrm{etak} = k\eta_s = -\frac{k\eta}{2K_{\mathrm{fac}}}
$$
In the flat case ($K = 0$), this simplifies to:
$$
\mathrm{etak} = -\frac{k\eta}{2}
$$
In the CDM frame ($v_c = 0$, $A = 0$), the relationship of CAMB’s hdot to the synchronous
gauge variable is given by:
$$
\dot{h}_s = 6\dot{h} = 2kz
$$
where $z$ is the expansion rate perturbation.
## Gauge Transformation Examples
**Example 1: CDM Frame to Newtonian Gauge**
To transform from CDM frame to Newtonian gauge, apply:
* Set $\sigma = 0$ (zero shear condition)
* $\Phi_N = \phi + \frac{1}{2}\frac{a^2\kappa\Pi}{k^2}$
* $\Psi_N = \phi - \frac{1}{2}\frac{a^2\kappa\Pi}{k^2}$
**Example 2: Frame-Dependent Variables**
Some variables change under gauge transformations:
* Density perturbations: $\Delta_i \to \Delta_i + \frac{3H(1+w_i)\delta u}{k}$
* Velocities: $v_i \to v_i - \delta u$
* Heat flux: $q_i \to q_i - \frac{(\rho_i+p_i)\delta u}{\rho_i}$
where $\delta u$ is the frame transformation parameter.
## Practical Usage Notes
**For Transfer Functions:**
* All density variables $\Delta_i$ are in synchronous gauge
* Velocities depend on the specific context and gauge choice
* The Weyl potential is gauge-invariant
**For Custom Sources:**
* Use [`camb.symbolic.make_frame_invariant()`](symbolic.md#camb.symbolic.make_frame_invariant) to create gauge-invariant combinations (see the sketch after this list)
* The symbolic module provides automatic conversion between gauges
* See the ScalEqs notebook for practical examples
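A minimal sketch of the first point, mirroring the symbolic examples in the notebook section later in this document (the parameter values are illustrative):
```python
import numpy as np
import camb
import camb.symbolic as cs

# illustrative parameters
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.965)
data = camb.get_results(pars)

# Delta_c is frame-dependent; make_frame_invariant fixes it to the Newtonian frame,
# giving a gauge-invariant expression that can be evaluated like any named variable
Delta_c_N = cs.make_frame_invariant(cs.Delta_c, "Newtonian")
eta = np.linspace(1, 400, 300)  # conformal times in Mpc
ev = data.get_time_evolution(0.1, eta, ["delta_cdm", Delta_c_N])  # k = 0.1/Mpc
# ev[:, 0] is the default CDM-frame (synchronous) density, ev[:, 1] the Newtonian-gauge one
```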
**Common Pitfalls:**
* Don’t mix variables from different gauges without proper transformation
* Remember that CAMB’s “synchronous gauge” is specifically the CDM frame
* Anisotropic stress components ($\pi_g$, $\pi_r$, $\pi_{\nu}$) and total anisotropic stress $\Pi$ are gauge-invariant
## Cross-References
* [Symbolic manipulation](symbolic.md) - Complete symbolic equation system and gauge transformations
* [ScalEqs notebook](https://camb.readthedocs.io/en/latest/ScalEqs.html) - Interactive examples with variable definitions
* [Matter power spectrum and matter transfer function variables](transfer_variables.md) - Transfer function variables and their meanings
* [Input parameter model](model.md) - Parameter and variable definitions for the Python interface
For the complete mathematical framework and equation derivations, see the technical notes referenced in the main documentation and the symbolic module documentation.
## https://camb.readthedocs.io/en/latest/fortran_compilers.html
# Fortran compilers
CAMB internally uses modern (object-oriented) Fortran 2008 for most numerical calculations (see [docs](https://camb.info/doc/)),
and needs a fortran compiler to build the numerical library. The recommended compilers are
- gfortran
- Intel Fortran (ifort), version 18.0.1 or higher
The gfortran compiler is part of the standard “gcc” compiler package, and may be pre-installed on recent unix systems.
Check the version using “gfortran --version”.
If you do not have a suitable Fortran compiler, you can get one as follows:
* **Mac:**
Download the [binary installation](https://gcc.gnu.org/wiki/GFortranBinaries)
* **Windows:**
Download gfortran as part of [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files) (select x86_64 option in the installation program)
or get latest from niXman on [GitHub](https://github.com/niXman/mingw-builds-binaries/releases) (e.g. x86_64-13.2.0-release-win32-seh-msvcrt-rt_v11-rev1)
* **Linux:**
To install from the standard repository use:
```default
sudo apt-get update; sudo apt-get install gfortran
```
Alternatively you can compile and run in a container or virtual machine: e.g., see [CosmoBox](https://cosmologist.info/CosmoBox).
For example, to run a configured shell in docker where you can install and run camb from the command line (after changing to the camb directory):
```default
docker run -v /local/git/path/CAMB:/camb -i -t cmbant/cosmobox
```
## Updating modified Fortran code
In the main CAMB source root directory, to re-build the Fortran binary including any
pulled or local changes use:
```default
python setup.py make
```
This will also work on Windows as long as you have MinGW-w64 installed under Program Files as described above.
NOTE: gfortran occasionally produces [memory leaks](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120637): if you see
leaks running your code, try adding final methods to explicitly free any allocatable arrays or subcomponents.
Note that you will need to close all python instances using camb before you can re-load with an updated library.
This includes in Jupyter notebooks; just re-start the kernel or use:
```default
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
```
If you want to automatically rebuild the library from Jupyter you can do something like this:
```default
import subprocess
import sys
import os
src_dir = '/path/to/git/CAMB'
try:
    # use shell=True so the command string also works outside Windows
    subprocess.check_output('python "%s" make' % os.path.join(src_dir, 'setup.py'),
                            shell=True, stderr=subprocess.STDOUT)
sys.path.insert(0,src_dir)
import camb
print('Using CAMB %s installed at %s'%(camb.__version__,
os.path.dirname(camb.__file__)))
except subprocess.CalledProcessError as E:
print(E.output.decode())
```
## https://camb.readthedocs.io/en/latest/mathutils.html
# Maths utils
This module contains some fast utility functions that are useful in the same contexts as camb. They are entirely
independent of the main camb code.
### camb.mathutils.chi_squared(covinv, x)
Utility function to efficiently calculate x^T covinv x
* **Parameters:**
* **covinv** – symmetric inverse covariance matrix
* **x** – vector
* **Returns:**
covinv.dot(x).dot(x), but parallelized and using symmetry
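A quick sanity check against plain NumPy (the matrix and vector here are just random illustrative inputs):
```python
import numpy as np
from camb.mathutils import chi_squared

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
covinv = A @ A.T + 500 * np.eye(500)  # symmetric positive-definite "inverse covariance"
x = rng.standard_normal(500)

print(chi_squared(covinv, x))  # threaded calculation exploiting symmetry
print(covinv.dot(x).dot(x))    # same result (up to rounding) with plain NumPy
```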
### camb.mathutils.pcl_coupling_matrix(P, lmax, pol=False)
Get Pseudo-Cl coupling matrix from power spectrum of mask.
Uses multiple threads. See Eq A31 of [astro-ph/0105302](https://arxiv.org/abs/astro-ph/0105302)
* **Parameters:**
* **P** – power spectrum of mask
* **lmax** – lmax for the matrix
* **pol** – whether to calculate TE, EE, BB couplings
* **Returns:**
coupling matrix (square but not symmetric), or list of TT, TE, EE, BB if pol
### camb.mathutils.scalar_coupling_matrix(P, lmax)
Get scalar Pseudo-Cl coupling matrix from power spectrum of mask, or array of power masks.
Uses multiple threads. See Eq A31 of [astro-ph/0105302](https://arxiv.org/abs/astro-ph/0105302)
* **Parameters:**
* **P** – power spectrum of mask, or list of mask power spectra
* **lmax** – lmax for the matrix (assumed square)
* **Returns:**
coupling matrix (square but not symmetric), or list of couplings for different masks
### camb.mathutils.threej(l2, l3, m2, m3)
Convenience wrapper around standard 3j function, returning array for all allowed l1 values
* **Parameters:**
* **l2** – L_2
* **l3** – L_3
* **m2** – M_2
* **m3** – M_3
* **Returns:**
array of 3j from max(abs(l2-l3),abs(m2+m3)) .. l2+l3
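For example (with arbitrarily chosen quantum numbers), the returned array satisfies the standard 3j orthogonality relation $\sum_{l_1}(2l_1+1)\,(3j)^2 = 1$:
```python
import numpy as np
from camb.mathutils import threej

l2, l3, m2, m3 = 10, 12, 2, -3
w3j = threej(l2, l3, m2, m3)  # 3j values for all allowed l1 at once
l1 = np.arange(max(abs(l2 - l3), abs(m2 + m3)), l2 + l3 + 1)
# orthogonality check: sum over l1 of (2*l1+1)*threej**2 should be 1
print(np.dot(2 * l1 + 1, w3j**2))
```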
### camb.mathutils.threej_coupling(W, lmax, pol=False)
Calculate the symmetric coupling matrix $\Xi$ for given weights $W_{\ell}$,
where $\langle\tilde{C}_\ell\rangle = \sum_{\ell'}\Xi_{\ell \ell'} (2\ell'+1) C_{\ell'}$.
The weights are related to the power spectrum of the mask P
by $W_\ell = (2 \ell + 1) P_\ell / 4 \pi$.
See e.g. Eq D16 of [arxiv:0801.0554](http://arxiv.org/abs/0801.0554).
If pol is False and W is an array of weights, produces array of temperature couplings, otherwise for pol is True
produces set of TT, TE, EE, EB couplings (and weights must have one spectrum - for same masks - or three).
Use [`scalar_coupling_matrix()`](#camb.mathutils.scalar_coupling_matrix) or [`pcl_coupling_matrix()`](#camb.mathutils.pcl_coupling_matrix) to get the coupling matrix directly from the
mask power spectrum.
* **Parameters:**
* **W** – 1d array of Weights for each L, or list of arrays of weights (zero based)
* **lmax** – lmax for the output matrix (assumed symmetric, though not in principle)
* **pol** – if pol, produce TT, TE, EE, EB couplings for three input mask weights (or one if assuming same mask)
* **Returns:**
symmetric coupling matrix or array of matrices
### camb.mathutils.threej_pt(l1, l2, l3, m1, m2, m3)
Convenience testing function to get 3j for specific arguments.
Normally use threej to get an array at once for same cost.
* **Parameters:**
* **l1** – L_1
* **l2** – L_2
* **l3** – L_3
* **m1** – M_1
* **m2** – M_2
* **m3** – M_3
* **Returns:**
Wigner 3j (integer zero if outside triangle constraints)
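A minimal cross-check of a single value against the array returned by [`threej()`](#camb.mathutils.threej) (same arbitrary quantum numbers as above; the non-zero 3j requires $m_1 = -(m_2+m_3)$):
```python
from camb.mathutils import threej, threej_pt

l2, l3, m2, m3 = 10, 12, 2, -3
w3j = threej(l2, l3, m2, m3)
lmin = max(abs(l2 - l3), abs(m2 + m3))
l1 = lmin + 5  # pick one allowed l1 value
print(threej_pt(l1, l2, l3, -(m2 + m3), m2, m3), w3j[l1 - lmin])  # should agree
```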
## Usage examples from CAMBdemo jupyter notebook
**CAMB Python example notebook**
Run it online yourself in [Binder](https://mybinder.org/v2/gh/cmbant/camb/master?filepath=docs%2FCAMBdemo.ipynb).
```python
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
import numpy as np
from matplotlib import pyplot as plt
import camb
from camb import initialpower, model
print("Using CAMB %s installed at %s" % (camb.__version__, os.path.dirname(camb.__file__)))
# make sure the version and path are what you expect
```
```python
# Set up a new set of parameters for CAMB
# The defaults give one massive neutrino and helium set using BBN consistency
pars = camb.set_params(
H0=67.5, ombh2=0.022, omch2=0.122, mnu=0.06, omk=0, tau=0.06, As=2e-9, ns=0.965, halofit_version="mead", lmax=3000
)
```
```python
# calculate results for these parameters
results = camb.get_results(pars)
```
```python
# get dictionary of CAMB power spectra
powers = results.get_cmb_power_spectra(pars, CMB_unit="muK")
for name in powers:
print(name)
```
```python
# plot the total lensed CMB power spectra versus unlensed, and fractional difference
totCL = powers["total"]
unlensedCL = powers["unlensed_scalar"]
print(totCL.shape)
# Python CL arrays are all zero based (starting at L=0); note that the L=0,1 entries will be zero by default.
# The different CL are always in the order TT, EE, BB, TE (with BB=0 for unlensed scalar results).
ls = np.arange(totCL.shape[0])
fig, ax = plt.subplots(2, 2, figsize=(12, 12))
ax[0, 0].plot(ls, totCL[:, 0], color="k")
ax[0, 0].plot(ls, unlensedCL[:, 0], color="C2")
ax[0, 0].set_title(r"$TT\, [\mu K^2]$")
ax[0, 1].plot(ls[2:], 1 - unlensedCL[2:, 0] / totCL[2:, 0])
ax[0, 1].set_title(r"Fractional TT lensing")
ax[1, 0].plot(ls, totCL[:, 1], color="k")
ax[1, 0].plot(ls, unlensedCL[:, 1], color="C2")
ax[1, 0].set_title(r"$EE\, [\mu K^2]$")
ax[1, 1].plot(ls, totCL[:, 3], color="k")
ax[1, 1].plot(ls, unlensedCL[:, 3], color="C2")
ax[1, 1].set_title(r"$TE\, [\mu K^2]$")
for ax in ax.reshape(-1):
ax.set_xlim([2, 3000])
ax.set_xlabel(r"$\ell$")
```
```python
# The lensing B modes are non-linear, so they need to be calculated carefully if you want them accurate (even at low ell)
# Need both high lmax, non-linear lensing and high k
# lens_potential_accuracy=1 turns on the latter, and can be increased to check precision
pars.set_for_lmax(2500, lens_potential_accuracy=1)
results = camb.get_results(pars)
lmax2500CL = results.get_lensed_scalar_cls(CMB_unit="muK")
ls = np.arange(lmax2500CL.shape[0])
pars.set_for_lmax(4000, lens_potential_accuracy=1)
results = camb.get_results(pars)
lmax4000CL = results.get_lensed_scalar_cls(CMB_unit="muK")
pars.set_for_lmax(4000, lens_potential_accuracy=2)
results = camb.get_results(pars)
accCL = results.get_lensed_scalar_cls(CMB_unit="muK")
pars.set_for_lmax(6000, lens_potential_accuracy=4)
results = camb.get_results(pars)
refCL = results.get_lensed_scalar_cls(CMB_unit="muK")
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
ax[0].plot(ls, totCL[: len(ls), 2], color="C0")
ax[0].plot(ls, lmax2500CL[: len(ls), 2], color="C1")
ax[0].plot(ls, lmax4000CL[: len(ls), 2], color="C2")
ax[0].plot(ls, accCL[: len(ls), 2], color="C3")
ax[0].plot(ls, refCL[: len(ls), 2], color="k")
ax[0].set_xlim([2, 2500])
ax[0].set_xlabel(r"$\ell$", fontsize=13)
ax[0].set_ylabel(r"$\ell(\ell+1)C_\ell^{BB}/2\pi\,[\mu {\rm K}^2]$", fontsize=13)
ax[1].plot(ls[2:], totCL[2 : len(ls), 2] / refCL[2 : len(ls), 2] - 1, color="C0")
ax[1].plot(ls[2:], lmax2500CL[2 : len(ls), 2] / refCL[2 : len(ls), 2] - 1, color="C1")
ax[1].plot(ls[2:], lmax4000CL[2 : len(ls), 2] / refCL[2 : len(ls), 2] - 1, color="C2")
ax[1].plot(ls[2:], accCL[2 : len(ls), 2] / refCL[2 : len(ls), 2] - 1, color="C3")
ax[1].axhline(0, color="k")
ax[1].set_xlim([2, 2500])
ax[1].set_xlabel(r"$\ell$", fontsize=13)
ax[1].set_ylabel("fractional error", fontsize=13)
ax[1].legend(
[
"Default accuracy",
"lmax=2500, lens_potential_accuracy=1",
"lmax=4000, lens_potential_accuracy=1",
"lmax=4000, lens_potential_accuracy=2",
]
);
```
```python
# You can calculate spectra for different primordial power spectra without recalculating everything
# for example, let's plot the BB spectra as a function of r
pars.set_for_lmax(4000, lens_potential_accuracy=1)
pars.WantTensors = True
results = camb.get_transfer_functions(pars)
lmax = 2000
rs = np.linspace(0, 0.2, 6)
for r in rs:
inflation_params = initialpower.InitialPowerLaw()
inflation_params.set_params(ns=0.96, r=r)
results.power_spectra_from_transfer(inflation_params) # warning OK here, not changing scalars
cl = results.get_total_cls(lmax, CMB_unit="muK")
plt.loglog(np.arange(lmax + 1), cl[:, 2])
plt.xlim([2, lmax])
plt.legend(["$r = %s$" % r for r in rs], loc="lower right")
plt.ylabel(r"$\ell(\ell+1)C_\ell^{BB}/ (2\pi \mu{\rm K}^2)$")
plt.xlabel(r"$\ell$");
```
```python
# Now get matter power spectra and sigma8 at redshift 0 and 0.8
# parameters can all be passed as a dict as above, or you can call
# separate functions to set up the parameter object
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, ns=0.965)
# Note that non-linear corrections couple to smaller scales than you want
pars.set_matter_power(redshifts=[0.0, 0.8], kmax=2.0)
# Linear spectra
pars.NonLinear = model.NonLinear_none
results = camb.get_results(pars)
kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1, npoints=200)
s8 = np.array(results.get_sigma8())
# Non-Linear spectra (Halofit)
pars.NonLinear = model.NonLinear_both
results.calc_power_spectra(pars)
kh_nonlin, z_nonlin, pk_nonlin = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1, npoints=200)
print("sigma 8 values at the two redshifts:", results.get_sigma8())
```
```python
for i, (redshift, line) in enumerate(zip(z, ["-", "--"])):
plt.loglog(kh, pk[i, :], color="k", ls=line)
plt.loglog(kh_nonlin, pk_nonlin[i, :], color="r", ls=line)
plt.xlabel("k/h Mpc")
plt.legend(["linear", "non-linear"], loc="lower left")
plt.title("Matter power at z=%s and z= %s" % tuple(z));
```
```python
# If you want to use sigma8 (redshift zero) as an input parameter, you have to scale the input primordial amplitude As:
As = 2e-9 # fiducial amplitude guess to start with
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, ns=0.965, As=As)
pars.set_matter_power(redshifts=[0.0], kmax=2.0)
results = camb.get_results(pars)
s8_fid = results.get_sigma8_0()
# now set correct As using As \propto sigma8**2.
sigma8 = 0.81 # value we want
pars.InitPower.set_params(As=As * sigma8**2 / s8_fid**2, ns=0.965)
# check result
results = camb.get_results(pars)
print(sigma8, results.get_sigma8_0())
```
```python
# Plot CMB lensing potential power for various values of w at fixed H0
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(As=2e-9, ns=0.965)
pars.set_for_lmax(2000, lens_potential_accuracy=1)
ws = np.linspace(-1.5, -0.6, 5)
for w in ws:
pars.set_dark_energy(w=w, wa=0, dark_energy_model="fluid")
results = camb.get_results(pars)
cl = results.get_lens_potential_cls(lmax=2000)
plt.loglog(np.arange(2001), cl[:, 0])
plt.legend([f"$w = {w:.3f}$" for w in ws])
plt.ylabel(r"$[L(L+1)]^2C_L^{\phi\phi}/2\pi$")
plt.xlabel("$L$")
plt.xlim([2, 2000])
```
```python
# Same for varying w at fixed thetastar rather than fixed H0
# When using thetastar you must instead call set_cosmology() *after* setting the dark energy parameters
# or use camb.set_params to set everything at once as in this example
ws = np.linspace(-1.5, -0.6, 5)
for w in ws:
pars = camb.set_params(
w=w,
wa=0,
dark_energy_model="fluid",
thetastar=0.0104,
ombh2=0.022,
omch2=0.12,
As=2e-9,
ns=0.96,
lmax=2000,
lens_potential_accuracy=2,
)
results = camb.get_results(pars)
cl = results.get_lens_potential_cls(lmax=2000)
plt.loglog(np.arange(2001), cl[:, 0])
plt.legend([f"$w = {w:.3f}$" for w in ws])
plt.ylabel(r"$[L(L+1)]^2C_L^{\phi\phi}/2\pi$")
plt.xlabel("$L$")
plt.xlim([2, 2000])
```
---
You can view the parameters (as used by the Fortran CAMB internals) by just printing the parameter object.
See the [docs](https://camb.readthedocs.io/en/latest/model.html) for parameter and structure descriptions.
```python
# parameters can also be read from text .ini files, for example this sets up a best-fit
# Planck 2018 LCDM cosmology (base_plikHM_TTTEEE_lowl_lowE_lensing).
# [Use planck_2018_acc.ini if you need high-ell and/or accurate BB and CMB lensing spectra at beyond-Planck accuracy]
pars = camb.read_ini("https://raw.githubusercontent.com/cmbant/CAMB/master/inifiles/planck_2018.ini")
# for a local github installation you can just do
# pars=camb.read_ini(os.path.join(camb_path,'inifiles','planck_2018.ini'))
# view parameter objects using print(), or use pickle or repr to save and restore
print(pars)
```
```python
# The dark energy model can be changed as in the previous example, or by assigning to pars.DarkEnergy.
# ** Note that if using thetastar as a parameter, you *must* set dark energy before calling set_cosmology
# or use the camb.set_params() function setting everything at once from a dict **
# e.g. use the PPF model
from camb.dark_energy import DarkEnergyPPF
pars.DarkEnergy = DarkEnergyPPF(w=-1.2, wa=0.2)
print("w, wa model parameters:\n\n", pars.DarkEnergy)
results = camb.get_background(pars)
# or can also use a w(a) numerical function
# (note this will slow things down; make your own dark energy class in fortran for best performance)
a = np.logspace(-5, 0, 1000)
w = -1.2 + 0.2 * (1 - a)
pars.DarkEnergy = DarkEnergyPPF()
pars.DarkEnergy.set_w_a_table(a, w)
print("Table-interpolated parameters (w and wa are set to estimated values at 0):\n\n", pars.DarkEnergy)
results2 = camb.get_background(pars)
rho, _ = results.get_dark_energy_rho_w(a)
rho2, _ = results2.get_dark_energy_rho_w(a)
plt.plot(a, rho, color="k")
plt.plot(a, rho2, color="r", ls="--")
plt.ylabel(r"$\rho/\rho_0$")
plt.xlabel("$a$")
plt.xlim(0, 1)
plt.title("Dark enery density");
```
```python
# Get various background functions and derived parameters
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
results = camb.get_background(pars)
print("Derived parameter dictionary: %s" % results.get_derived_params())
```
```python
z = np.linspace(0, 4, 100)
DA = results.angular_diameter_distance(z)
plt.plot(z, DA)
plt.xlabel("$z$")
plt.ylabel(r"$D_A /\rm{Mpc}$")
plt.title("Angular diameter distance")
plt.ylim([0, 2000])
plt.xlim([0, 4]);
```
```python
print("CosmoMC theta_MC parameter: %s" % results.cosmomc_theta())
```
```python
# You can also directly access some lower level quantities, for example the CMB transfer functions:
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
data = camb.get_transfer_functions(pars)
transfer = data.get_cmb_transfer_data()
print("Number of sources (T, E, phi..): %s; number of ell: %s; number of k: %s " % tuple(transfer.delta_p_l_k.shape))
```
```python
# Plot the transfer functions as a function of k for various ell
fig, axs = plt.subplots(2, 2, figsize=(12, 8), sharex=True)
for ix, ax in zip([3, 20, 40, 60], axs.reshape(-1)):
ax.plot(transfer.q, transfer.delta_p_l_k[0, ix, :])
ax.set_title(r"$\ell = %s$" % transfer.L[ix])
if ix > 1:
ax.set_xlabel(r"$k \rm{Mpc}$")
```
```python
# Note that the internal sampling can be quite sparse, e.g. look at the l=2 transfer function
def plot_cmb_transfer_l(trans, ix):
_, axs = plt.subplots(1, 2, figsize=(12, 6))
for source_ix, (name, ax) in enumerate(zip(["T", "E"], axs)):
ax.semilogx(trans.q, trans.delta_p_l_k[source_ix, ix, :])
ax.set_xlim([1e-5, 0.05])
ax.set_xlabel(r"$k \rm{Mpc}$")
ax.set_title(r"%s transfer function for $\ell = %s$" % (name, trans.L[ix]))
plot_cmb_transfer_l(transfer, 0)
```
```python
# So if you want to make nice plots, either interpolate or do things at higher than default accuracy
pars.set_accuracy(AccuracyBoost=2) # higher accuracy, so higher sampling density
data = camb.get_transfer_functions(pars)
plot_cmb_transfer_l(data.get_cmb_transfer_data(), 0)
pars.set_accuracy(AccuracyBoost=1); # re-set default
```
```python
# Similarly for tensor transfer function
# e.g. see where various C_L^BB come from in k by plotting normalized transfer**2
# (C_l is ~ integral over log k P(k) T(k)^2)
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.WantScalars = False
pars.WantTensors = True
pars.set_accuracy(AccuracyBoost=2)
data = camb.get_transfer_functions(pars)
transfer = data.get_cmb_transfer_data("tensor")
print(r"Calculated L: %s" % transfer.L)
plt.figure(figsize=(14, 3))
ixs = [13, 19, 21]
ls = [transfer.L[i] for i in ixs]
cols = ["b", "r", "c"]
for ix, col in zip(ixs, cols):
k_weight = transfer.delta_p_l_k[2, ix, :] ** 2
k_weight /= np.sum(k_weight)
plt.semilogx(transfer.q, k_weight, color=col)
plt.xlim([1e-3, 0.1])
plt.legend(ls)
plt.xlabel(r"$k \rm{Mpc}$")
plt.title(r"Contribution to B from primordial tensor power spectrum for various $\ell$")
# compare with k_* = l/chi*, note DAstar is in Gpc, so multiply by 1000 to get standard Mpc units used for k
derived = data.get_derived_params()
for ell, col in zip(ls, cols):
plt.axvline(ell / (1000 * derived["DAstar"]), color=col, ls=":", lw=2)
```
```python
# if you want to combine the transfer functions with the primordial power spectra, you can get the latter via
k = 10 ** np.linspace(-5, 1, 50)
pars.InitPower.set_params(ns=0.96, r=0.2)  # this function imposes the inflation consistency relation by default
scalar_pk = pars.scalar_power(k)
tensor_pk = pars.tensor_power(k)
plt.semilogx(k, scalar_pk)
plt.semilogx(k, tensor_pk)
plt.xlabel(r"$k \rm{Mpc}$")
plt.ylabel(r"${\cal P}(k)$")
plt.legend(["scalar", "tensor"]);
```
```python
# set_params is a shortcut routine for setting many things at once
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.95)
data = camb.get_background(pars)
```
```python
# There are functions to get the evolution of variables for plotting, e.g. for the background as a function of conformal time:
# (there is an example changing the reionization history later)
eta = 10 ** (np.linspace(1, 4, 300))
back_ev = data.get_background_time_evolution(eta, ["x_e", "visibility"])
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
axs[0].semilogx(eta, back_ev["x_e"])
axs[1].loglog(eta, back_ev["visibility"])
axs[0].set_xlabel(r"$\eta/\rm{Mpc}$")
axs[0].set_ylabel("$x_e$")
axs[1].set_xlabel(r"$\eta/\rm{Mpc}$")
axs[1].set_ylabel("Visibility")
fig.suptitle("Ionization history, including both hydrogen and helium recombination and reionization");
```
```python
# or as a function of redshift
z = 10 ** np.linspace(2, 4, 300)
back_ev = data.get_background_redshift_evolution(z, ["x_e", "visibility"], format="array")
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
for i, (ax, label) in enumerate(zip(axs, ["$x_e$", "Visibility"])):
ax.semilogx(z, back_ev[:, i])
ax.set_xlabel("$z$")
ax.set_ylabel(label)
ax.set_xlim([500, 1e4])
```
```python
# ..and perturbation transfer functions, e.g. for k=0.1.
# Note that quantities are synchronous gauge unless otherwise specified
# Also note v_newtonian_cdm, v_newtonian_baryon include a factor of -k/H, and Weyl a factor of k^2 -
# see https://camb.readthedocs.io/en/latest/transfer_variables.html
print("Available variables are %s" % camb.model.evolve_names)
```
```python
eta = np.linspace(1, 400, 300)
ks = [0.02, 0.1]
ev = data.get_time_evolution(ks, eta, ["delta_baryon", "delta_photon"])
_, axs = plt.subplots(1, 2, figsize=(12, 5))
for i, ax in enumerate(axs):
ax.plot(eta, ev[i, :, 0])
ax.plot(eta, ev[i, :, 1])
ax.set_title("$k= %s$" % ks[i])
ax.set_xlabel(r"$\eta/\rm{Mpc}$")
plt.legend([r"$\Delta_b$", r"$\Delta_\gamma$"], loc="upper left")
```
```python
# or as a function of redshift
z = np.linspace(500, 5000, 300)
ks = [0.02, 0.1]
ev = data.get_redshift_evolution(ks, z, ["delta_baryon", "delta_cdm", "delta_photon"])
_, axs = plt.subplots(1, 2, figsize=(12, 5))
for i, ax in enumerate(axs):
ax.plot(z, ev[i, :, 0])
ax.plot(z, ev[i, :, 1])
ax.plot(z, ev[i, :, 2])
ax.set_title(r"$k= %s/\rm{Mpc}$" % ks[i])
ax.set_xlabel("$z$")
plt.legend([r"$\Delta_b$", r"$\Delta_c$", r"$\Delta_\gamma$"], loc="upper right");
```
```python
# Here you can see the oscillation of delta_photon, the subsequent decay of the potential, and the change to Meszaros growth in delta_cdm
eta = 10 ** (np.linspace(0, 3, 500))
def plot_ev(ev, k):
plt.figure(figsize=(8, 6))
plt.loglog(eta, ev[:, 0])
plt.loglog(eta, np.abs(ev[:, 1]))
plt.loglog(eta, -ev[:, 2] / k**2) # Weyl is k^2*phi
plt.title(r"$k= %s/\rm{Mpc}$" % k)
plt.xlabel(r"$\eta/\rm{Mpc}$")
plt.legend([r"$\Delta_c$", r"$|\Delta_\gamma|$", r"$-(\Phi+\Psi)/2$"], loc="upper left")
k = 0.3
plot_ev(data.get_time_evolution(k, eta, ["delta_cdm", "delta_photon", "Weyl"]), k)
```
```python
# Note that time evolution can be visually quite sensitive to accuracy.
# By default it is boosted, but you can change this. e.g.
plot_ev(data.get_time_evolution(k, eta, ["delta_cdm", "delta_photon", "Weyl"], lAccuracyBoost=1), k)
plot_ev(data.get_time_evolution(k, eta, ["delta_cdm", "delta_photon", "Weyl"], lAccuracyBoost=10), k)
```
```python
# It's also possible to plot quantities in other gauges, or arbitrary symbolic expressions,
# using camb.symbolic.
# For example, this plots the Newtonian gauge density compared to the synchronous gauge one
import camb.symbolic as cs
Delta_c_N = cs.make_frame_invariant(cs.Delta_c, "Newtonian")
ev = data.get_time_evolution(k, eta, ["delta_cdm", Delta_c_N])
plt.figure(figsize=(6, 4))
plt.loglog(eta, ev[:, 0])
plt.loglog(eta, ev[:, 1])
plt.title(r"$k= %s/\rm{Mpc}$" % k)
plt.xlabel(r"$\eta/\rm{Mpc}$")
plt.legend([r"$\Delta_c^{\rm synchronous}$", r"$\Delta_c^{\rm Newtonian}$"], fontsize=14);
```
```python
# Or see that the Newtonian-gauge CDM peculiar velocity decays roughly propto 1/a on sub-horizon
# scales during radiation domination after the potentials have decayed so there is no driving force
# (leading to logarithmic Meszaros growth of the CDM density perturbation)
k = 4
vc_N = cs.make_frame_invariant(cs.v_c, "Newtonian")
ev = data.get_time_evolution(k, eta, [Delta_c_N, vc_N, "a"])
plt.figure(figsize=(6, 4))
plt.loglog(eta, ev[:, 0])
plt.loglog(eta, -ev[:, 1])
eta_ln = 6 * np.sqrt(3) * np.pi / 4 / k
horizon_index = np.searchsorted(eta, eta_ln, side="right")
plt.loglog(eta, -ev[horizon_index, 1] * ev[horizon_index, 2] / ev[:, 2], ls="--")
plt.ylim([1e-4, 5e2])
plt.title(r"$k= %s/\rm{Mpc}$" % k)
plt.xlabel(r"$\eta/\rm{Mpc}$")
plt.legend(
[r"$\Delta_c^{\rm Newtonian}$", r"$-v_c^{\rm Newtonian}$", r"$v_c^{\rm Newtonian}\propto 1/a$"], fontsize=14
);
```
For further details of camb.symbolic and examples of what can be done see the [CAMB symbolic ScalEqs notebook](https://camb.readthedocs.io/en/latest/ScalEqs.html) (run now in [Binder](https://mybinder.org/v2/gh/cmbant/camb/master?filepath=docs%2FScalEqs.ipynb)). This also serves as documentation for the scalar equations implemented in CAMB.
```python
# For calculating large-scale structure and lensing results yourself, get a power spectrum
# interpolation object. In this example we calculate the CMB lensing potential power
# spectrum using the Limber approximation, using PK=camb.get_matter_power_interpolator() function.
# calling PK(z, k) will then get power spectrum at any k and redshift z in range.
nz = 100 # number of steps to use for the radial/redshift integration
kmax = 10 # kmax to use
# First set up parameters as usual
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(ns=0.965)
# For Limber result, want integration over \chi (comoving radial distance), from 0 to chi_*.
# so get background results to find chistar, set up a range in chi, and calculate corresponding redshifts
results = camb.get_background(pars)
chistar = results.conformal_time(0) - results.tau_maxvis
chis = np.linspace(0, chistar, nz)
zs = results.redshift_at_comoving_radial_distance(chis)
# Calculate array of delta_chi, and drop first and last points where things go singular
dchis = (chis[2:] - chis[:-2]) / 2
chis = chis[1:-1]
zs = zs[1:-1]
# Get the matter power spectrum interpolation object (based on RectBivariateSpline).
# Here for lensing we want the power spectrum of the Weyl potential.
PK = camb.get_matter_power_interpolator(
pars,
nonlinear=True,
hubble_units=False,
k_hunit=False,
kmax=kmax,
var1=model.Transfer_Weyl,
var2=model.Transfer_Weyl,
zmax=zs[-1],
)
# Have a look at interpolated power spectrum results for a range of redshifts
# Expect linear potentials to decay a bit when Lambda becomes important, and change from non-linear growth
plt.figure(figsize=(8, 5))
k = np.exp(np.log(10) * np.linspace(-4, 2, 200))
zplot = [0, 0.5, 1, 4, 20]
for z in zplot:
plt.loglog(k, PK.P(z, k))
plt.xlim([1e-4, kmax])
plt.xlabel(r"k Mpc")
plt.ylabel(r"$P_\Psi\, Mpc^{-3}$")
plt.legend(["z=%s" % z for z in zplot])
```
Now do integral to get convergence power spectrum, using Limber approximation ($k\approx (l+0.5)/\chi$)
$$
C_l^\kappa \approx [l(l+1)]^2\int_0^{\chi_*} d\chi \left( \frac{\chi_*-\chi}{\chi^2\chi_*}\right)^2 P_\Psi\left(\frac{l+0.5}{\chi}, z(\chi)\right)
$$
where $P_\Psi $ is obtained from the interpolator.
```python
# Get lensing window function (flat universe)
win = ((chistar - chis) / (chis**2 * chistar)) ** 2
# Do integral over chi
ls = np.arange(2, 2500 + 1, dtype=np.float64)
cl_kappa = np.zeros(ls.shape)
w = np.ones(chis.shape)  # this is just used to zero out k values that are outside the interpolation range
for i, ell in enumerate(ls):
k = (ell + 0.5) / chis
w[:] = 1
w[k < 1e-4] = 0
w[k >= kmax] = 0
cl_kappa[i] = np.dot(dchis, w * PK.P(zs, k, grid=False) * win / k**4)
cl_kappa *= (ls * (ls + 1)) ** 2
```
```python
# Compare with CAMB's calculation:
# note that to get CAMB's internal calculation accurate at the 1% level at L~2000,
# you need lens_potential_accuracy=2. Increase to 4 for an accurate match to the Limber calculation here
pars.set_for_lmax(2500, lens_potential_accuracy=2)
results = camb.get_results(pars)
cl_camb = results.get_lens_potential_cls(2500)
# cl_camb[:,0] is phi x phi power spectrum (other columns are phi x T and phi x E)
# Make plot. Expect difference at very low-L from inaccuracy in Limber approximation, and
# very high L from differences in kmax (lens_potential_accuracy is only 2, though good by eye here)
cl_limber = 4 * cl_kappa / 2 / np.pi # convert kappa power to [l(l+1)]^2C_phi/2pi (what cl_camb is)
plt.loglog(ls, cl_limber, color="b")
plt.loglog(np.arange(2, cl_camb[:, 0].size), cl_camb[2:, 0], color="r")
plt.xlim([1, 2000])
plt.legend(["Limber", "CAMB hybrid"])
plt.ylabel(r"$[L(L+1)]^2C_L^{\phi}/2\pi$")
plt.xlabel("$L$")
```
```python
# The non-linear model can be changed like this:
pars.set_matter_power(redshifts=[0.0], kmax=2.0)
pars.NonLinearModel.set_params(halofit_version="takahashi")
kh_nonlin, _, pk_takahashi = results.get_nonlinear_matter_power_spectrum(params=pars)
pars.NonLinearModel.set_params(halofit_version="mead")
kh_nonlin, _, pk_mead = results.get_nonlinear_matter_power_spectrum(params=pars)
fig, axs = plt.subplots(2, 1, sharex=True, figsize=(8, 8))
axs[0].loglog(kh_nonlin, pk_takahashi[0])
axs[0].loglog(kh_nonlin, pk_mead[0])
axs[1].semilogx(kh_nonlin, pk_mead[0] / pk_takahashi[0] - 1)
axs[1].set_xlabel(r"$k/h\, \rm{Mpc}$")
axs[1].legend(["Mead/Takahashi-1"], loc="upper left");
```
```python
# Get angular power spectrum for galaxy number counts and lensing
from camb.sources import GaussianSourceWindow, SplinedSourceWindow
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(As=2e-9, ns=0.965)
pars.set_for_lmax(lmax, lens_potential_accuracy=1)
# set Want_CMB to true if you also want CMB spectra or correlations
pars.Want_CMB = False
# NonLinear_both or NonLinear_lens will use non-linear corrections
pars.NonLinear = model.NonLinear_both
# Set up W(z) window functions, later labelled W1, W2. Gaussian here.
pars.SourceWindows = [
GaussianSourceWindow(redshift=0.17, source_type="counts", bias=1.2, sigma=0.04, dlog10Ndm=-0.2),
GaussianSourceWindow(redshift=0.5, source_type="lensing", sigma=0.07),
]
results = camb.get_results(pars)
cls = results.get_source_cls_dict()
# Note that P is CMB lensing, as a deflection angle power (i.e. PxP is [l(l+1)]^2C_l\phi\phi/2\pi)
# lensing window functions are for kappa (and counts for the fractional angular number density)
ls = np.arange(2, lmax + 1)
for spectrum in ["W1xW1", "W2xW2", "W1xW2", "PxW1", "PxW2"]:
plt.loglog(ls, cls[spectrum][2 : lmax + 1], label=spectrum)
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\ell(\ell+1)C_\ell/2\pi$")
plt.legend()
```
```python
# You can also use window functions from numerical table of W(z). It must be well enough sampled to interpolate nicely.
# e.g. reproduce Gaussian this way..
zs = np.arange(0, 0.5, 0.02)
W = np.exp(-((zs - 0.17) ** 2) / 2 / 0.04**2) / np.sqrt(2 * np.pi) / 0.04
pars.SourceWindows = [SplinedSourceWindow(bias=1.2, dlog10Ndm=-0.2, z=zs, W=W)]
results = camb.get_results(pars)
cls2 = results.get_cmb_unlensed_scalar_array_dict()
plt.loglog(ls, cls["W1xW1"][2 : lmax + 1], label=spectrum)
plt.loglog(ls, cls2["W1xW1"][2 : lmax + 1], ls="--")
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\ell(\ell+1)C_\ell/2\pi$")
```
```python
# Sources can include various terms using these options (line_xx refers to 21cm)
print(pars.SourceTerms)
```
```python
# Results above include redshift distortions but not magnification bias (counts_lensing).
# Try turning off redshift distortions:
pars.SourceTerms.counts_redshift = False
results = camb.get_results(pars)
cls3 = results.get_source_cls_dict()
plt.loglog(ls, cls["W1xW1"][2 : lmax + 1])
plt.loglog(ls, cls3["W1xW1"][2 : lmax + 1])
plt.legend(["With redshift distortions", "Without"])
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\ell(\ell+1)C_\ell/2\pi$")
plt.xlim(2, lmax);
```
```python
# For number counts you can give a redshift-dependent bias (the underlying code supports general b(z,k))
# toy model for single-bin LSST/Vera Rubin [using numbers from 1705.02332]
z0 = 0.311
zs = np.arange(0, 10, 0.02)
W = np.exp(-zs / z0) * (zs / z0) ** 2 / 2 / z0
bias = 1 + 0.84 * zs
pars.SourceWindows = [SplinedSourceWindow(dlog10Ndm=0, z=zs, W=W, bias_z=bias)]
lmax = 3000
pars.set_for_lmax(lmax, lens_potential_accuracy=5)
results = camb.get_results(pars)
# e.g. plot the cross-correlation with CMB lensing
cls = results.get_cmb_unlensed_scalar_array_dict()
nbar = 40 / (np.pi / 180 / 60) ** 2  # Poisson noise
ls = np.arange(2, lmax + 1)
Dnoise = 1 / nbar * ls * (ls + 1) / 2 / np.pi
correlation = cls["PxW1"][2 : lmax + 1] / np.sqrt(cls["PxP"][2 : lmax + 1] * (cls["W1xW1"][2 : lmax + 1] + Dnoise))
plt.plot(np.arange(2, lmax + 1), correlation)
plt.xlabel(r"$L$")
plt.ylabel("correlation")
plt.xlim(2, lmax)
plt.title("CMB lensing - LSST correlation (single redshift bin)");
```
```python
# Let's look at a non-standard primordial power spectrum, e.g. with a wavepacket oscillation
# Define our custom power spectrum function (here power law with one wavepacket)
def PK(k, As, ns, amp, freq, wid, centre, phase):
return As * (k / 0.05) ** (ns - 1) * (1 + np.sin(phase + k * freq) * amp * np.exp(-((k - centre) ** 2) / wid**2))
# Check how this looks compared to power law
ks = np.linspace(0.02, 1, 1000)
pk1 = 2e-9 * (ks / 0.05) ** (0.96 - 1)
pk2 = PK(ks, 2e-9, 0.96, 0.0599, 280, 0.08, 0.2, 0)
plt.semilogx(ks, pk1)
plt.semilogx(ks, pk2)
plt.ylabel("$P(k)$")
plt.xlabel(r"$k\, {\rm Mpc}$")
plt.legend(["Power law", "Custom"])
plt.title("Scalar initial power spectrum");
```
```python
# Now compute C_l and compare
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122, tau=0.06)
lmax = 2500
pars.set_for_lmax(lmax, lens_potential_accuracy=1)
# For comparison, standard power law
pars.InitPower.set_params(As=2e-9, ns=0.96)
results = camb.get_results(pars)
cl_unlensed = results.get_unlensed_scalar_cls(CMB_unit="muK")
cl = results.get_lensed_scalar_cls(CMB_unit="muK")
# Now get custom spectrum (effective_ns_for_nonlinear is used for halofit if required)
pars.set_initial_power_function(PK, args=(2e-9, 0.96, 0.0599, 280, 0.08, 0.2, 0), effective_ns_for_nonlinear=0.96)
results2 = camb.get_results(pars)
cl2 = results2.get_lensed_scalar_cls(CMB_unit="muK")
ls = np.arange(2, lmax)
plt.plot(ls, (cl2[2:lmax, 0] - cl[2:lmax, 0]))
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\ell(\ell+1)\Delta C_\ell/2\pi\, [\mu K^2]$")
plt.title(r"$C_\ell$ difference to power law")
```
```python
# Note that if you have sharp features or fine oscillations, you may need to
# increase the accuracy to sample them well. e.g. let's try increasing the frequency
# Default accuracy
pars.Accuracy.lSampleBoost = 1
pars.Accuracy.IntkAccuracyBoost = 1
pars.Accuracy.SourcekAccuracyBoost = 1
freq = 1000
ks = np.linspace(0.02, 1, 1000)
pk1 = 2e-9 * (ks / 0.05) ** (0.96 - 1)
pk2 = PK(ks, 2e-9, 0.96, 0.0599, freq, 0.08, 0.2, 0)
plt.semilogx(ks, pk1)
plt.semilogx(ks, pk2)
plt.ylabel("$P(k)$")
plt.xlabel(r"$k\, {\rm Mpc}$")
plt.title("Scalar power spectrum")
plt.figure()
pars.set_initial_power_function(PK, args=(2e-9, 0.96, 0.0599, freq, 0.08, 0.2, 0), effective_ns_for_nonlinear=0.96)
results2 = camb.get_results(pars)
cl_unlensed2 = results2.get_unlensed_scalar_cls(CMB_unit="muK")
# need to increase default sampling in ell to see features smaller than peaks reliably
pars.Accuracy.lSampleBoost = 2
# may also need to sample k more densely when computing C_l from P(k)
pars.Accuracy.IntkAccuracyBoost = 2
results3 = camb.get_results(pars)
cl_unlensed3 = results3.get_unlensed_scalar_cls(CMB_unit="muK")
cl3 = results3.get_lensed_scalar_cls(CMB_unit="muK")
ls = np.arange(2, lmax)
plt.plot(ls, (cl_unlensed2[2:lmax, 0] / cl_unlensed[2:lmax, 0] - 1))
plt.plot(ls, (cl_unlensed3[2:lmax, 0] / cl_unlensed[2:lmax, 0] - 1))
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\Delta C_\ell/C_\ell$")
plt.title(r"Fractional $C_\ell$ difference to power law")
plt.legend(["Default accuracy", "Boosted accuracy"])
```
```python
# Note that lensing washes out small features on small scales
plt.plot(ls, (cl_unlensed3[2:lmax, 0] / cl_unlensed[2:lmax, 0] - 1))
plt.plot(ls, (cl3[2:lmax, 0] / cl[2:lmax, 0] - 1))
plt.xlabel(r"$\ell$")
plt.ylabel(r"$\Delta C_\ell/C_\ell$")
plt.legend(["Unlensed", "Lensed"])
plt.title(r"Fractional $C_\ell$ difference to power law");
```
```python
# Now look at the (small!) effect of neutrino mass splittings on the matter power spectrum (in linear theory)
# The "neutrino_hierarchy" parameter uses a two eigenstate approximation to the full hierarchy (which is very accurate for cosmology)
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122, mnu=0.11, neutrino_hierarchy="normal")
pars.InitPower.set_params(ns=0.965)
pars.set_matter_power(redshifts=[0.0], kmax=2.0)
results = camb.get_results(pars)
kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=2, npoints=200)
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122, mnu=0.11, neutrino_hierarchy="inverted")
results = camb.get_results(pars)
kh2, z2, pk2 = results.get_matter_power_spectrum(minkh=1e-4, maxkh=2, npoints=200)
plt.semilogx(kh, pk2[0, :] / pk[0, :] - 1)
plt.ylabel(r"$\Delta$ PK/PK")
plt.xlabel(r"$k\, [h \,\rm{Mpc}^{-1}]$")
plt.title(r"Normal vs Inverted for $\sum m_\nu=0.11 \rm{eV}$");
```
```python
# Matter power functions can get other variables than the total density.
# For example look at the relative baryon-CDM velocity by using the power spectrum of
# model.Transfer_vel_baryon_cdm
kmax = 10
k_per_logint = 30
zs = [200, 500, 800, 1090]
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(ns=0.965)
pars.WantTransfer = True
pars.set_matter_power(redshifts=zs, kmax=kmax, k_per_logint=k_per_logint, silent=True)
results = camb.get_results(pars)
PKint = results.get_matter_power_interpolator(
nonlinear=False,
hubble_units=False,
k_hunit=False,
var1=model.Transfer_vel_baryon_cdm,
var2=model.Transfer_vel_baryon_cdm,
)
```
```python
# Make plot like Fig 1 of https://arxiv.org/abs/1005.2416
# showing contribution to the CDM-baryon relative velocity variance per log k
plt.figure(figsize=(8, 5))
ks = np.logspace(-3, np.log10(kmax), 300)
for i, z in enumerate(zs):
curlyP = PKint.P(z, ks) * ks**3 / (2 * np.pi**2)
plt.semilogx(ks, curlyP)
rms = np.sqrt(curlyP[1:-1].dot((ks[2:] - ks[:-2]) / 2 / ks[1:-1]))
print("rms velocity at z=%s: %.3g" % (z, rms), "(%.3g km/s)" % (rms * 299792))
plt.xlim([1e-3, kmax])
plt.xlabel("k Mpc", fontsize=16)
plt.ylabel(r"$\mathcal{P}_v(k)$", fontsize=16)
plt.legend(["$z = %s$" % z for z in zs]);
```
```python
# You can also get the matter transfer functions
# These are synchronous gauge and normalized to unit primordial curvature perturbation
# The values stored in the array are quantities like Delta_x/k^2, and hence
# are nearly independent of k on large scales.
# Indices in the transfer_data array are the variable type, the k index, and the redshift index
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(ns=0.965)
pars.set_matter_power(redshifts=[0], kmax=kmax)
results = camb.get_results(pars)
trans = results.get_matter_transfer_data()
# get kh - the values of k/h at which they are calculated
kh = trans.transfer_data[0, :, 0]
# transfer functions for different variables, e.g. CDM density and the Weyl potential
# CDM perturbations have grown, Weyl is O(1) of primordial value on large scales
delta = trans.transfer_data[model.Transfer_cdm - 1, :, 0]
W = trans.transfer_data[model.Transfer_Weyl - 1, :, 0]
plt.plot(kh, delta)
plt.loglog(kh, -W)
plt.xlabel(r"$k/h\, [\rm Mpc]^{-1}$", fontsize=16)
plt.title("Matter transfer functions")
plt.legend([r"$\Delta_c/k^2$", r"Weyl potential $\Psi$"], fontsize=14);
```
```python
# Check we can get the matter power spectrum from the transfer function as expected
k = kh * results.Params.h
transfer = trans.transfer_data[model.Transfer_tot - 1, :, 0]
primordial_PK = results.Params.scalar_power(k)
matter_power = primordial_PK * transfer**2 * k**4 / (k**3 / (2 * np.pi**2))
# compare with CAMB's explicit output for the matter power spectrum
kh2, zs, PK = results.get_linear_matter_power_spectrum(hubble_units=False)
plt.loglog(kh, matter_power)
plt.loglog(kh, PK[0, :])
plt.xlabel(r"$k\, [h Mpc^{-1}]$");
```
```python
# It is also possible to get the real-space linear perturbation variance in spheres, i.e. \sigma_R.
# Using R=8 and hubble_units=True would return the standard definition of \sigma_8
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.optimize import brentq
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.InitPower.set_params(ns=0.965)
pars.set_matter_power(redshifts=[0, 1], kmax=kmax)
results = camb.get_results(pars)
R, z, sigma_R = results.get_sigmaR(R=np.arange(1, 20, 0.5), hubble_units=False, return_R_z=True)
plt.plot(R, sigma_R[1, :], label="z = %s" % z[1])
plt.plot(R, sigma_R[0, :], label="z = %s" % z[0])
plt.ylabel(r"$\sigma_R$", fontsize=14)
plt.xlabel(r"$R/{\rm Mpc}$", fontsize=14)
plt.legend()
# To get the non-linear scale where sigma_R=1, e.g. at z=0
sR = InterpolatedUnivariateSpline(R, sigma_R[-1, :] - 1)
R_nonlin = brentq(sR, R[0], R[-1])
print(r"R giving \sigma_R=1 at z=0 is at R=%.2f Mpc (or %.2f Mpc/h)" % (R_nonlin, R_nonlin * pars.h))
```
```python
# the above is for the default density variable delta_tot; without including
# massive neutrinos the result would be very slightly different
sigma_R_nonu = results.get_sigmaR(R=np.arange(1, 20, 0.5), var1="delta_nonu", var2="delta_nonu", hubble_units=False)
plt.plot(R, sigma_R_nonu[0, :] / sigma_R[0, :] - 1, label="z = %s" % z[0])
plt.plot(R, sigma_R_nonu[1, :] / sigma_R[1, :] - 1, label="z = %s" % z[1])
plt.ylabel(r"$\Delta\sigma_R/\sigma_R$", fontsize=14)
plt.xlabel(r"$R/{\rm Mpc}$", fontsize=14)
plt.legend();
```
```python
# results for power spectra using different initial power spectra
# can be computed without recomputing the transfer functions
pars = camb.set_params(
H0=67.5, ombh2=0.022, omch2=0.122, redshifts=[0], kmax=5, As=2e-9, ns=0.96, halofit_version="mead"
)
results = camb.get_transfer_functions(pars)
kref, _, PKref = results.get_linear_matter_power_spectrum(hubble_units=False, k_hunit=False)
kref, _, PKnl = results.get_nonlinear_matter_power_spectrum(hubble_units=False, k_hunit=False)
ns_arr = np.arange(0.94, 0.981, 0.01)
fig, axs = plt.subplots(1, 2, figsize=(14, 5), sharey=True)
for ns in ns_arr:
results.Params.InitPower.set_params(As=2e-9, ns=ns)
results.calc_power_spectra()
k, _, PK = results.get_linear_matter_power_spectrum(hubble_units=False, k_hunit=False)
np.testing.assert_allclose(k, kref)
axs[0].semilogx(k, PK[0, :] / PKref[0, :] - 1)
k, _, PK = results.get_nonlinear_matter_power_spectrum(hubble_units=False, k_hunit=False)
axs[1].semilogx(k, PK[0, :] / PKnl[0, :] - 1)
for ax, t in zip(axs, ["Linear", "Nonlinear"]):
ax.set_title("%s spectrum" % t)
ax.set_xlim(1e-3, 5)
# ax.set_ylim([-0.1,0.1])
ax.set_xlabel(r"$k\, [Mpc^{-1}]$")
plt.legend(["$n_s = %.2f$" % ns for ns in ns_arr], ncol=2, loc="lower left")
axs[0].set_ylabel(r"$\Delta P(k)/P(k)$", fontsize=16);
```
```python
# The non-linear model parameters can also be varied without recomputing the transfer functions
# eg. look at the effect of the HMcode baryonic feedback parameter
results = camb.get_transfer_functions(pars)
kref, _, PKnl = results.get_nonlinear_matter_power_spectrum(hubble_units=False, k_hunit=False)
feedbacks = np.arange(2, 4.1, 0.5)
for baryfeed in feedbacks:
results.Params.NonLinearModel.set_params(halofit_version="mead", HMCode_A_baryon=baryfeed)
results.calc_power_spectra()
k, _, PK = results.get_nonlinear_matter_power_spectrum(hubble_units=False, k_hunit=False)
np.testing.assert_allclose(k, kref)
plt.semilogx(k, PK[0, :] / PKnl[0, :] - 1)
plt.xlim(1e-2, 5)
plt.ylabel(r"$\Delta P(k)/P(k)$", fontsize=16)
plt.xlabel(r"$k\, [Mpc^{-1}]$")
plt.legend([r"$A_{\rm baryon} = %.2f$" % b for b in feedbacks]);
```
```python
# However non-linear lensing or other sources, or non-linearly lensed CMB requires
# recalculation from the time transfer functions. This is a bit slower but faster than recomputing everything.
# e.g. look at the impact of the baryon feedback parameter on the lensing potential power spectrum
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, ns=0.965, lens_potential_accuracy=1, lmax=3000)
results = camb.get_transfer_functions(pars, only_time_sources=True)
results.calc_power_spectra()
ref = results.get_lens_potential_cls()[:, 0]
feedbacks = np.arange(2, 4.1, 0.5)
for baryfeed in feedbacks:
results.Params.NonLinearModel.set_params(halofit_version="mead", HMCode_A_baryon=baryfeed)
results.calc_power_spectra()
CL = results.get_lens_potential_cls()[:, 0]
plt.semilogx(np.arange(2, CL.shape[0]), CL[2:] / ref[2:] - 1)
plt.legend([r"$A_{\rm baryon} = %.2f$" % b for b in feedbacks])
plt.ylabel(r"$\Delta C_L/C_L$", fontsize=16)
plt.xlabel("$L$")
plt.title("Impact of baryonic feedback on the lensing potential");
```
```python
# CAMB has a basic scalar field quintessence dark energy model
# EarlyQuintessence is a specific example implementation that implements early dark energy
# (axion-like, as arXiv:1908.06995) with potential V(\phi) = m^2f^2 (1 - cos(\phi/f))^n + \Lambda
# AxionEffectiveFluid is an approximate model that does not evolve the quintessence equations
# To implement other models you'd need to make your own new subclass.
# Note that this is not as well tested as most of the rest of CAMB.
# Use n=3 and keep theta_* angular distance parameter fixed to roughly fit CMB data
thetastar = 0.01044341764253
n = 3
fig, axs = plt.subplots(3, 1, figsize=(10, 8))
zs = np.logspace(1, 5, 500)
pars = camb.set_params(ombh2=0.022, omch2=0.122, thetastar=thetastar) #
results = camb.get_results(pars)
print("LCDM: h0=", results.Params.H0)
cl_LCDM = results.get_lensed_scalar_cls(CMB_unit="muK")
axs[1].plot(cl_LCDM[2:, 0])
# Set dark energy fraction 0.1 at z=1e4
pars = camb.set_params(
ombh2=0.022,
omch2=0.122,
w_n=(n - 1.0) / (n + 1.0),
theta_i=np.pi / 2,
zc=1e4,
fde_zc=0.1,
dark_energy_model="AxionEffectiveFluid",
thetastar=thetastar,
) #
results = camb.get_results(pars)
print("AxionEffectiveFluid: h0 = ", results.Params.H0)
axs[0].semilogx(zs, results.get_Omega("de", z=zs))
cl = results.get_lensed_scalar_cls(CMB_unit="muK")
axs[0].set_ylabel(r"$\Omega_{\rm de}$")
axs[2].plot(cl[2:, 0] / cl_LCDM[2:, 0] - 1)
pars = camb.set_params(
ombh2=0.022,
omch2=0.122,
m=8e-53,
f=0.05,
n=n,
theta_i=3.1,
use_zc=True,
zc=1e4,
fde_zc=0.1,
dark_energy_model="EarlyQuintessence",
thetastar=thetastar,
) #
results = camb.get_results(pars)
print("EarlyQuintessence: h0 = ", results.Params.H0)
cl = results.get_lensed_scalar_cls(CMB_unit="muK")
axs[0].semilogx(zs, results.get_Omega("de", z=zs))
axs[0].set_xlabel(r"$z$")
axs[1].plot(cl[2:, 0])
axs[2].plot(cl[2:, 0] / cl_LCDM[2:, 0] - 1)
axs[1].set_ylabel(r"$D_l [\mu {\rm K}^2]$")
axs[2].set_ylabel(r"$\Delta D_l [\mu {\rm K}^2]$")
axs[1].set_xlabel(r"$\ell$")
axs[2].set_xlabel(r"$\ell$")
plt.tight_layout()
print("m = ", results.Params.DarkEnergy.m, "f = ", results.Params.DarkEnergy.f)
```
```python
# You can also calculate CMB correlation functions
# (the correlations module also has useful functions for the inverse transform,
# CMB lensing and derivatives of the lensed spectra - see docs)
from camb import correlations
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.96)
pars.set_for_lmax(4000, lens_potential_accuracy=1)
results = camb.get_results(pars)
lensed_cl = results.get_lensed_scalar_cls()
corrs, xvals, weights = correlations.gauss_legendre_correlation(lensed_cl)
r = np.arccos(xvals) * 180 / np.pi # sampled theta values in degrees
fig, axs = plt.subplots(2, 2, figsize=(12, 8))
for ix, (cap, ax) in enumerate(zip(["TT", "+", "-", r"\times"], axs.reshape(4))):
ax.plot(r, corrs[:, ix])
ax.axhline(0, color="k")
ax.set_xlim([0, 10])
ax.set_xlabel(r"$\theta$ [degrees]")
ax.set_ylabel(r"$\zeta_{%s}(\theta)$" % (cap), fontsize=14)
ax.set_xlim(0, 4)
plt.suptitle("Correlation functions");
```
```python
# For partially-delensed or Alens-scaled spectra, power spectra can be computed using
# a custom or scaled lensing potential power spectrum. There's a pure-python
# interface in the correlations module, or you can use the results object functions
# (faster).
# Here just plot results for scaled lensing spectrum, can use
# get_lensed_cls_with_spectrum to calculate lensed spectrum with specific
# lensing power if needed. For BB the scaling is fairly linear, but less so for
# other spectra.
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=2e-9, ns=0.96)
pars.set_for_lmax(3500, lens_potential_accuracy=1)
results = camb.get_results(pars)
for Alens in [1, 0.9, 0.5, 0.1]:
plt.plot(np.arange(2, 2501), results.get_partially_lensed_cls(Alens, lmax=2500)[2:, 2], label="$A_L = %s$" % Alens)
plt.ylabel(r"$C_\ell^{BB}$")
plt.xlabel(r"$\ell$")
plt.xlim([2, 2500])
plt.legend();
```
```python
# For CMB lensing reconstruction, the non-perturbative response functions depend on the
# `gradient' C_l, which can be calculated in the flat sky approximation using CAMB
# On large scales similar to lensed CMB spectrum, but important difference esp for TT on small scales
# see arXiv:1906.08760, 1101.2234
c_lensed = results.get_lensed_scalar_cls()
c_lens_response = results.get_lensed_gradient_cls()
plt.plot(np.arange(2, 3501), c_lens_response[2:3501, 0] / c_lensed[2:3501, 0] - 1)
plt.ylabel(r"$\Delta C_l/C_l$")
plt.xlabel("$l$")
plt.xlim(2, 3500)
plt.title(r"Lensing gradient response vs lensed $C_l$");
```
CAMB also supports dark-age 21cm power spectra (see [astro-ph/0702600](https://arxiv.org/abs/astro-ph/0702600))
```python
# Get 21cm transfer functions, similar to Fig 4 of astro-ph/0702600
pars = camb.set_params(
cosmomc_theta=0.0104, ombh2=0.022, omch2=0.12, tau=0.053, As=2.08e-09, ns=0.968, num_massive_neutrinos=1
)
pars.Do21cm = True
pars.Evolve_delta_xe = True # ionization fraction perturbations
pars.Evolve_baryon_cs = True # accurate baryon perturbations
pars.WantCls = False
pars.SourceTerms.use_21cm_mK = False # Use dimensionless rather than mK units
redshifts = [50]
pars.set_matter_power(kmax=1000, redshifts=redshifts)
# get transfer functions
results = camb.get_results(pars)
trans = results.get_matter_transfer_data()
# get kh - the values of k/h at which they are calculated
k = trans.transfer_data[0, :, 0] * results.Params.h
for zix, z in enumerate(redshifts):
# see https://camb.readthedocs.io/en/latest/transfer_variables.html
delta_c = trans.transfer_data[model.Transfer_cdm - 1, :, zix]
delta_b = trans.transfer_data[model.Transfer_b - 1, :, zix]
T_g = trans.transfer_data[model.Transfer_Tmat - 1, :, zix]
mono = trans.transfer_data[model.Transfer_monopole - 1, :, zix]
plt.loglog(k, mono * k**2, color="k", ls="-", lw=2)
plt.plot(k, delta_b * k**2, color="gray", ls="-")
plt.plot(k, T_g * k**2, color="b", ls="--")
plt.plot(k, delta_c * k**2, color="C0", ls="-.")
plt.xlabel(r"$k\, \rm Mpc$", fontsize=16)
plt.title(f"Transfer functions, z={redshifts}")
plt.legend([r"21cm monopole", r"$\Delta_b$", r"$\Delta_{T_g}$", r"$\Delta_c$"], fontsize=14)
plt.ylabel(r"$T_\chi$")
plt.xlim(1e-4, 1e3)
plt.ylim(1e-3, 1e4);
```
```python
# Reionization models can be changed using the reionization_model parameter (new in v1.5.1).
# The default tanh model is not very physical; a sample exponential model is included as an alternative
# To define a new model, inherit from the ReionizationModel classes already defined (in Fortran and Python)
# Plot the reionization histories and corresponding EE/BB spectra for various parameters
pdf_file = ""
pdf = None
if pdf_file:
from matplotlib.backends.backend_pdf import PdfPages
pdf = PdfPages(pdf_file)
z = np.logspace(-2, 3, 1000)
for exp_pow in [1, 1.2, 1.5, 2]:
fig, axs = plt.subplots(1, 2, figsize=(14, 6))
ax = axs[0]
for tau, c in zip((0.04, 0.055, 0.08, 0.12), ("k", "C0", "C1", "C2")):
As = 1.8e-9 * np.exp(2 * tau)
pars2 = camb.set_params(
H0=67.5,
ombh2=0.022,
omch2=0.122,
As=As,
ns=0.95,
r=0.001,
reionization_model="ExpReionization",
tau=tau,
reion_exp_power=exp_pow,
**{"Reion.timestep_boost": 1},
)
data2 = camb.get_results(pars2)
pars = camb.set_params(H0=67.5, ombh2=0.022, omch2=0.122, As=As, ns=0.95, tau=tau, r=0.001)
data = camb.get_results(pars)
eta = data.conformal_time(z)
eta2 = data2.conformal_time(z)
back_ev = data.get_background_time_evolution(eta, ["x_e", "visibility"])
back_ev2 = data2.get_background_time_evolution(eta2, ["x_e", "visibility"])
ax.plot(
z,
back_ev["x_e"],
ls="--",
color=c,
label=r"Tanh, $\tau =%.3f, z_{0.5} = %.3f$" % (tau, data.Params.Reion.redshift),
)
ax.plot(
z, back_ev2["x_e"], color=c, label=r"Exp, $\tau =%.3f, z_{0.5} = %.3f$" % (tau, data2.Params.Reion.redshift)
)
cl = data.get_total_cls()
cl2 = data2.get_total_cls()
axs[1].loglog(cl[2:, 1], ls="--", color=c)
axs[1].loglog(cl2[2:, 1], ls="-", color=c)
cl = data.get_lensed_scalar_cls()
cl2 = data2.get_lensed_scalar_cls()
axs[1].loglog(cl[2:, 2], ls=":", color=c)
axs[1].loglog(cl2[2:, 2], ls="-.", color=c)
cl = data.get_tensor_cls()
cl2 = data2.get_tensor_cls()
axs[1].loglog(cl[2:, 2], ls="--", color=c)
axs[1].loglog(cl2[2:, 2], ls="-", color=c)
axs[1].set_xlim((2, 250))
axs[1].set_title(r"$EE$/$BB$: $r=0.001$, $A_s e^{-2\tau}=1.8\times 10^{-9}$")
axs[1].set_xlabel(r"$\ell$")
axs[1].set_ylabel(r"$D_\ell$")
ax.set_xlim(0, 30)
ax.set_xlabel("$z$")
ax.set_ylabel("$x_e$")
ax.legend(fontsize=9)
ax.axvline(6.1, color="g", ls=":")
ax.set_title(r"$x_e(z>z_c) \propto e^{-\lambda (z-z_c)^p}$, $p=%s$" % exp_pow)
if pdf:
pdf.savefig()
```
### Animations
Going back to standard CMB, here is an example of how to make an animation of the evolution of
the transfer functions.
```python
# A potential issue here is that with large dynamic range, you may wish to plot modes where
# k*eta << 1 (long way outside horizon). Evolution is not normally calculated in the code a long
# way outside the horizon, starting instead with the series solution. So for results which are numerically
# unstable, you may need to replace the numerical result with the series result.
# Use widget to see animation in notebook rather than just saving (with "pip install ipympl")
# %matplotlib widget
import camb.symbolic as cs
pars = camb.read_ini("https://raw.githubusercontent.com/cmbant/CAMB/master/inifiles/planck_2018.ini")
data = camb.get_results(pars)
# get ranges to plot, evolving until recombination
zstar = data.get_derived_params()["zstar"]
etastar = data.conformal_time(zstar)
# stop time evolution (eta) at recombination
eta = np.logspace(-1.7, np.log10(etastar), 400)
# wide range of k
k = 10 ** np.linspace(-3, 1, 1200)
# get some background quantities
z = data.redshift_at_conformal_time(eta)
back_ev = data.get_background_time_evolution(eta, ["x_e", "visibility"])
x_e = back_ev["x_e"]
vis = back_ev["visibility"]
adotoa = data.h_of_z(z) / (1 + z) # comoving Hubble
# ratio of matter to radiation (neglecting neutrino mass)
rho = data.get_background_densities(1 / (1 + z))
Rmat = (rho["baryon"] + rho["cdm"]) / (rho["photon"] + rho["neutrino"] + rho["nu"])
# some quantities needed for the super-horizon series solution - see the series solutions in the notes at
# https://cosmologist.info/notes/CAMB.pdf
grhonu = data.grhormass[0] + data.grhornomass
Rv = grhonu / (grhonu + data.grhog)
omega = (data.grhob + data.grhoc) / np.sqrt(3 * (data.grhog + grhonu))
# initial value of super-horizon Psi for unit curvature
Psi_init = -10 / (4 * Rv + 15)
# make evolution plot in Newtonian gauge. Use symbolic package to get right gauge-invariant quantities.
ev = data.get_time_evolution(
k,
eta,
[
        cs.Psi_N,  # Newtonian potential \Psi
        "delta_photon",  # synchronous gauge photon perturbation
cs.make_frame_invariant(cs.Delta_g, "Newtonian"), # photon perturbation
cs.make_frame_invariant(cs.v_b, "Newtonian"), # baryon velocity
],
lAccuracyBoost=4,
)
Psi = ev[:, :, 0]
Delta_g = ev[:, :, 1]
Delta_g_N = ev[:, :, 2]
v_b_N = ev[:, :, 3]
# Now replace the Psi results well outside the horizon, where they are numerically unstable, with the series solution.
# Note the Newtonian-gauge Delta_g does not start at zero, so it is also unstable there.
for i, etai in enumerate(eta):
for j, kj in enumerate(k):
if etai * kj < 0.1 and etai * omega < 0.1:
Psi[j, i] = Psi_init - 25 / 8 * omega * etai * (8 * Rv - 3) / (4 * Rv + 15) / (2 * Rv + 15)
# use Delta_g_N = Delta_g - 4H sigma/k (with adiabatic series result for sigma)
Delta_g_N[j, i] = Delta_g[j, i] - 4 * adotoa[i] * (
Psi_init / 2 * etai - 15 * omega * etai**2 / 8 * (4 * Rv - 5) / (4 * Rv + 15) / (2 * Rv + 15)
)
```
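For reference, the super-horizon series expressions implemented in the loop above (transcribed directly from the code; $R_\nu$ and $\omega$ are the quantities computed earlier, $\mathcal{H}$ is the comoving Hubble rate `adotoa`, and $\eta$ is conformal time) are

$$
\Psi \approx \Psi_{\rm init} - \frac{25}{8}\,\omega\eta\,\frac{8R_\nu - 3}{(4R_\nu+15)(2R_\nu+15)},
\qquad \Psi_{\rm init} = -\frac{10}{4R_\nu+15},
$$

$$
\Delta_\gamma^{\rm Newt} \approx \Delta_\gamma - 4\mathcal{H}\,\frac{\sigma}{k},
\qquad
\frac{\sigma}{k} \approx \frac{\Psi_{\rm init}}{2}\,\eta - \frac{15}{8}\,\omega\eta^2\,\frac{4R_\nu - 5}{(4R_\nu+15)(2R_\nu+15)},
$$

valid when $k\eta \ll 1$ and $\omega\eta \ll 1$; see the CAMB notes linked above for the derivations.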
```python
# output animation (must have ffmpeg installed)
import math
from matplotlib import animation
def latex_sci_format(num, dec=3):
return "{:.{dec}f}\\times 10^{{{}}}".format(
num / (10 ** int(math.floor(math.log10(abs(num))))), int(math.floor(math.log10(abs(num)))), dec=dec
)
fig, ax = plt.subplots()
plot_vars = {
r"$\Psi$": Psi,
r"$\frac{1}{4}\Delta^{\rm Newt}_\gamma +\Psi$": Delta_g_N / 4 + Psi,
r"$\frac{1}{\sqrt{3}}v_b^{\rm Newt}$": v_b_N / np.sqrt(3),
}
anim_fps = 8
lines = []
for lab in plot_vars.keys():
(line,) = ax.semilogx([], [], label=lab)
lines.append(line)
ax.axhline(0, color="k")
plt.legend(loc="upper left")
ax.set_xlim(k[0], k[-1])
ax.set_ylim(-1, 1)
ax.set_xlabel(r"$k\, {\rm Mpc}$")
# Define the update function
def update(i):
ax.set_title(
r"$z= %s, \eta/{\rm Mpc}= %.2f, x_e = %.2f, \rho_m/\rho_{\rm rad}=%.2f$"
% (latex_sci_format(z[i]), eta[i], x_e[i], Rmat[i])
)
for line, var in zip(lines, plot_vars.values()):
line.set_data(k, var[:, i])
return lines
# Create the animation
transfer_anim = animation.FuncAnimation(fig, update, frames=range(len(eta)), blit=True)
writer = animation.writers["ffmpeg"](fps=anim_fps, bitrate=-1)
# can save like this
transfer_anim.save(r"Newtonian_transfer_evolve_monopole_phi_vb.mp4", writer=writer, dpi=240)
```
See the output mp4 file [here](https://cdn.cosmologist.info/antony/Newtonian_transfer_evolve_monopole_phi_vb.mp4) or play it inline using the snippet below:
```python
# or play inline using:
# from IPython.display import display, Video
# video = transfer_anim.to_html5_video()
# display(Video(data=video, embed=True, width = 600))
```
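If ffmpeg is not available, you could instead write the animation as an animated GIF using matplotlib's `PillowWriter` (this requires the Pillow package; the output filename here is just an illustrative choice):
```python
# Alternative to ffmpeg: save the same animation as an animated GIF via Pillow
from matplotlib import animation

gif_writer = animation.PillowWriter(fps=anim_fps)
transfer_anim.save("Newtonian_transfer_evolve_monopole_phi_vb.gif", writer=gif_writer, dpi=120)
```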
```python
# Animate the matter transfer and potential evolution up to today
# now use a log scale and the synchronous gauge, which are more relevant for the late-time matter
import camb.symbolic as cs
eta = np.logspace(-1.7, np.log10(data.tau0), 500)[:-1]
k = 10 ** np.linspace(-3, 2, 2000)
z = data.redshift_at_conformal_time(eta)
ev = data.get_time_evolution(
k,
eta,
[
        cs.Psi_N,
        "delta_cdm",  # synchronous gauge cdm perturbation
"delta_photon",
"delta_baryon",
],
lAccuracyBoost=4,
)
Psi = ev[:, :, 0]
Delta_c = ev[:, :, 1]
Delta_g = ev[:, :, 2]
Delta_b = ev[:, :, 3]
x_e = data.get_background_time_evolution(eta, ["x_e"])["x_e"]
adotoa = data.h_of_z(z) / (1 + z) # comoving Hubble
rho = data.get_background_densities(1 / (1 + z))
Rmat = (rho["baryon"] + rho["cdm"]) / (rho["photon"] + rho["neutrino"] + rho["nu"])
zstar = data.get_derived_params()["zstar"]
# series for super-horizon Psi
grhonu = data.grhormass[0] + data.grhornomass
Rv = grhonu / (grhonu + data.grhog)
omega = (data.grhob + data.grhoc) / np.sqrt(3 * (data.grhog + grhonu))
# initial value of super-horizon Psi for unit curvature and no isocurvature
Psi_init = -10 / (4 * Rv + 15)
for i, etai in enumerate(eta):
for j, kj in enumerate(k):
if etai * kj < 0.1 and etai * omega < 0.1:
Psi[j, i] = Psi_init - 25 / 8 * omega * etai * (8 * Rv - 3) / (4 * Rv + 15) / (2 * Rv + 15)
```
```python
fig, ax = plt.subplots()
plot_vars = {r"$\Delta_c$": Delta_c, r"$\Delta_b$": Delta_b, r"$\Delta_g$": Delta_g, r"$\Psi$": Psi}
anim_fps = 8
lines = []
for lab in plot_vars.keys():
(line,) = ax.loglog([], [], label=lab)
lines.append(line)
plt.axhline(0, ls="-", color="k")
plt.legend(loc="upper left")
ax.set_xlim(k[0], k[-1])
ax.set_xlabel(r"$k\, {\rm Mpc}$")
ax.set_ylim(1e-4, 2e5)
lines.append(ax.loglog([], [], ls="--", color="C1")[0])
lines.append(ax.loglog([], [], ls="--", color="C2")[0])
lines.append(ax.loglog([], [], ls="--", color="C3")[0])
# Define the update function
def update(i):
ax.set_title(
r"$z= %s, \eta/{\rm Mpc}= %.2f, x_e = %.2f, \rho_m/\rho_{\rm rad}=%.2f$"
% (latex_sci_format(z[i], 2), eta[i], x_e[i], Rmat[i]),
fontsize=10,
)
for line, var in zip(lines, plot_vars.values()):
line.set_data(k, var[:, i])
lines[-3].set_data(k, -Delta_b[:, i])
lines[-2].set_data(k, -Delta_g[:, i])
lines[-1].set_data(k, -Psi[:, i])
if z[i] < zstar: # hide photons after recombination as irrelevant for matter
lines[2].set_data([], [])
lines[-2].set_data([], [])
return lines
# Create the animation
transfer_anim = animation.FuncAnimation(fig, update, frames=range(len(eta)), blit=True, repeat=False)
writer = animation.writers["ffmpeg"](fps=anim_fps, bitrate=-1)
transfer_anim.save(r"matter_transfer_evolve_sync.mp4", writer=writer, dpi=240)
```
This is the resulting video of the synchronous-gauge transfer function evolution (dashed lines show negative values):