pyphot package#

Subpackages#

Submodules#

pyphot.config module#

pyphot.helpers module#

STmag_from_flux(v)[source]#

Convert to ST magnitude from erg/s/cm2/AA (Flambda)

\[ mag = -2.5 \log_{10}(F) - 21.10 \]

with zero point \(M_0 = 21.10\), corresponding to \(F_0 = 3.6307805477010028 \times 10^{-9}\) erg/s/cm2/AA
Parameters:

v (np.ndarray[float, ndim=N], or float) – array of fluxes

Returns:

mag – array of magnitudes

Return type:

np.ndarray[float, ndim=N], or float

STmag_to_flux(v)[source]#

Convert an ST magnitude to erg/s/cm2/AA (Flambda)

\[ mag = -2.5 \log_{10}(F) - 21.10 \]

with zero point \(M_0 = 21.10\), corresponding to \(F_0 = 3.6307805477010028 \times 10^{-9}\) erg/s/cm2/AA
Parameters:

v (np.ndarray[float, ndim=N] or float) – array of magnitudes

Returns:

flux – array of fluxes

Return type:

np.ndarray[float, ndim=N], or float
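Example

A minimal round-trip sketch (it assumes these helpers are importable from pyphot.helpers as documented above and that STmag_to_flux inverts STmag_from_flux):

import numpy as np
from pyphot.helpers import STmag_from_flux, STmag_to_flux

flux = np.array([1e-17, 5e-16, 2e-15])    # F_lambda in erg/s/cm2/AA
mags = STmag_from_flux(flux)              # mag = -2.5 log10(F) - 21.10
recovered = STmag_to_flux(mags)           # inverse transform
assert np.allclose(recovered, flux)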

deprecated(message)[source]#

Deprecated warning decorator

extractPhotometry(lamb, spec, flist, absFlux=True, progress=True)[source]#

Extract SEDs from a single spectrum

Parameters:
  • lamb (ndarray[float,ndim=1]) – wavelength of spec

  • spec (ndarray[float, ndim=1]) – spectrum

  • flist (list[filter]) – list of filter objects

  • absFlux (bool) – return SEDs in absolute fluxes if set

  • progress (bool) – show progression if set

Returns:

  • cls (ndarray[float, ndim=1]) – filters central wavelength

  • seds (ndarray[float, ndim=1]) – integrated sed
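Example

A minimal sketch (the filter names are illustrative; depending on the pyphot version, lamb and spec may need explicit units as in the sandbox module):

import numpy as np
from pyphot.phot import get_library
from pyphot.helpers import extractPhotometry

lamb = np.linspace(3000, 8000, 2000)     # wavelength in AA
spec = 1e-15 * np.ones_like(lamb)        # flat F_lambda spectrum in erg/s/cm2/AA

lib = get_library()
flist = lib.load_filters(['GROUND_JOHNSON_B', 'GROUND_JOHNSON_V'], lamb=lamb)

cls, seds = extractPhotometry(lamb, spec, flist, absFlux=False, progress=False)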

extractSEDs(lamb, specs, flist, absFlux=True, progress=True)[source]#

Extract SEDs from a spectral grid

Parameters:
  • g0 (ModelGrid instance) – initial spectral grid

  • flist (sequence(filter)) – list of filter object instances

  • absFlux (bool) – return SEDs in absolute fluxes if set

  • progress (bool) – show progression if set

Returns:

  • cls (ndarray[float, ndim=1]) – filters central wavelength

  • seds (ndarray[float, ndim=1]) – integrated sed

  • grid (Table) – SED grid properties table from g0 (g0.grid)

fluxErrTomag(flux, fluxerr)[source]#

Return the magnitudes and associated errors from fluxes and flux error values

Parameters:
  • flux (np.ndarray[float, ndim=1]) – array of fluxes

  • fluxerr (np.ndarray[float, ndim=1]) – array of flux errors

Returns:

  • mag (np.ndarray[float, ndim=1]) – array of magnitudes

  • err (np.ndarray[float, ndim=1]) – array of magnitude errors

fluxToMag(flux)[source]#

Return the magnitudes from flux values

Parameters:

flux (np.ndarray[float, ndim=N]) – array of fluxes

Returns:

mag – array of magnitudes

Return type:

np.ndarray[float, ndim=N]

magErrToFlux(mag, err)[source]#

Return the flux and associated errors from magnitude and mag error values

Parameters:
  • mag (np.ndarray[float, ndim=1]) – array of magnitudes

  • err (np.ndarray[float, ndim=1]) – array of magnitude errors

Returns:

  • flux (np.ndarray[float, ndim=1]) – array of fluxes

  • fluxerr (np.ndarray[float, ndim=1]) – array of flux errors

magToFlux(mag)[source]#

Return the flux from magnitude values

Parameters:

mag (np.ndarray[float, ndim=N]) – array of magnitudes

Returns:

flux – array of fluxes

Return type:

np.ndarray[float, ndim=N]
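Example

A short sketch exercising the flux/magnitude helpers above (it assumes magToFlux inverts fluxToMag and magErrToFlux inverts fluxErrTomag, as the docstrings imply):

import numpy as np
from pyphot.helpers import fluxToMag, magToFlux, fluxErrTomag, magErrToFlux

flux = np.array([1.0e-14, 2.5e-15])
fluxerr = 0.05 * flux                        # 5% flux uncertainties

mag, magerr = fluxErrTomag(flux, fluxerr)    # magnitudes and magnitude errors
flux2, fluxerr2 = magErrToFlux(mag, magerr)  # back to fluxes and flux errors

assert np.allclose(flux2, flux)
assert np.allclose(magToFlux(fluxToMag(flux)), flux)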

progress_enumerate(it, *args, **kwargs)[source]#

Enumerate over a sequence with progression if requested

Parameters:

show_progress (bool) – set to show progress
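Example

A minimal sketch (passing show_progress as a keyword is an assumption based on the signature progress_enumerate(it, *args, **kwargs)):

from pyphot.helpers import progress_enumerate

total = 0
for i, value in progress_enumerate(range(100), show_progress=True):
    total += value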

pyphot.licks module#

Lick indices calculations

This package provides functions to compute spectral indices.

A collection of many common indices is available in licks.dat

The Lick system of spectral line indices is one of the most commonly used methods of determining ages and metallicities of unresolved (integrated light) stellar populations.

The calibration of the Lick/IDS system is complicated because the original Lick spectra were not flux calibrated, so there are usually systematic effects due to differences in continuum shape. Proper calibration involves observing many of the original Lick/IDS standard stars and deriving offsets to the standard system.

References

Worthey G., Faber S. M., Gonzalez J. J., Burstein D., 1994, ApJS, 94, 687

Worthey G., Ottaviani D. L., 1997, ApJS, 111, 377

Puzia et al. 2002

Zhang, Li & Han 2005, http://arxiv.org/abs/astro-ph/0508634v1

Notes

In Vazdekis et al. (2010), we propose a new Line Index System, hereafter LIS, with three new spectral resolutions at which to measure the Lick indices. Note that this new system should not be restricted to the Lick set of indices in a flux calibrated system. In fact, LIS can be used for any index in the literature (e.g., for the Rose (1984) indices), including newly defined indices (e.g., Cervantes & Vazdekis 2009).

The LIS system is defined for 3 different spectral resolutions which are best suited for the following astrophysical cases:

  • LIS-5.0AA: globular clusters

  • LIS-8.4AA: low and intermediate-mass galaxies

  • LIS-14.0AA: massive galaxies

Conversions to transform the data from the Lick/IDS system to LIS can be found in Vazdekis et al. (2010).

For a discussion of the indices and further information, see Johansson, Thomas & Maraston 2010, http://wwwmpa.mpa-garching.mpg.de/~jonasj/milesff/milesff.pdf

class LickIndex[source]#

Bases: object

Define a Lick Index similarly to a Filter object

__init__(name, lick, unit='AA')[source]#

Constructor

Parameters:
  • name (str) – name of the index

  • lick (dict) – expecting ‘blue’, ‘red’, ‘band’, and ‘unit’ definitions. blue and red are used to continuum-normalize the spectra; band covers the index itself; unit gives the index measurement unit, either magnitudes (mag) or equivalent width (ew)

  • unit (str) – wavelength unit of the intervals

property band#

Unitwise band definition

property blue#

Unitwise band definition

classmethod continuum_normalized_region_around_line(wi, fi, blue, red, band=None, degree=1)[source]#

cut out and normalize flux around a line

Parameters:
  • wi (ndarray (nw, )) – array of wavelengths in AA

  • fi (ndarray (N, nw)) – array of flux values for different spectra in the series

  • blue (tuple(2)) – selection for blue continuum estimate

  • red (tuple(2)) – selection for red continuum estimate

  • band (tuple(2), optional) – select region in this band only. default is band = (min(blue), max(red))

  • degree (int) – degree of the polynomial fit to the continuum

Returns:

  • wnew (ndarray (nw1, )) – wavelength of the selection in AA

  • f (ndarray (N, len(wnew))) – normalized flux in the selection region

Example

# index of CaII
# wavelengths are always assumed to be in AA
w, f = LickIndex.continuum_normalized_region_around_line(
    wavelength, flux, [3925, 3930], [3938, 3945]
    )
get(wave, flux, **kwargs)[source]#

compute spectral index after continuum subtraction

Parameters:
  • w (ndarray (nw, )) – array of wavelengths in AA

  • flux (ndarray (N, nw)) – array of flux values for different spectra in the series

  • degree (int (default 1)) – degree of the polynomial fit to the continuum

  • nocheck (bool) – set to silently pass on spectral domain mismatch. otherwise raises an error when index is not covered

Returns:

ew – equivalent width or magnitude array

Return type:

ndarray (N,)

Raises:
  • ValueError – when the spectral coverage wave does not cover the index range
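Example

A short sketch computing an index directly from its band definitions (the Hbeta-like wavelength intervals below are illustrative values, not taken from licks.dat):

import numpy as np
from pyphot.licks import LickIndex

definition = dict(blue=(4827.9, 4847.9),    # blue continuum interval (AA)
                  red=(4876.6, 4891.6),     # red continuum interval (AA)
                  band=(4847.9, 4876.6),    # index passband (AA)
                  unit='ew')                # measure an equivalent width
hbeta = LickIndex('Hbeta', definition, unit='AA')

wave = np.linspace(4700, 5000, 600)         # AA
flux = np.ones((3, wave.size))              # three flat spectra
ew = hbeta.get(wave, flux, degree=1)        # -> ndarray (3,)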

property index_unit#
info()[source]#

display information about the current Index

property red#

Unitwise band definition

to_dict()[source]#

return a dictionary of the current index

class LickLibrary[source]#

Bases: object

Collection of Lick indices

__init__(fname=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/licks.dat'), comment='#')[source]#
property content#
property description#

any comment in the input file

find(name, case_sensitive=True)[source]#
get_library_content()[source]#
reduce_resolution(wi, fi, fwhm0=0.55, sigma_floor=0.2)[source]#

Adapt the resolution of the spectra to match the lick definitions

Lick definitions have different resolution elements as a function of wavelength. These definitions are hard-coded in this function.

Parameters:
  • wi (ndarray (n, )) – wavelength definition

  • fi (ndarray (nspec, n) or (n, )) – spectra to convert

  • fwhm0 (float) – initial broadening in the spectra fi

  • sigma_floor (float) – minimal dispersion to consider

Returns:

flux_red – reduced spectra

Return type:

ndarray (nspec, n) or (n, )

pyphot.pbar module#

Simple progressbar#

This package implements a single progress bar class that can be used to decorate an iterator, a function, or even be used standalone.

The format of the meter is flexible: along with the progress meter itself, it can display the running time, an ETA, and the iteration rate.

An example is::

description [----------] k/n 10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]

class Pbar[source]#

Bases: object

make a progress string in a shape of:

[----------] k/n  10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]
time#

if set, add the runtime information

Type:

bool, optional (default: True)

eta#

if set, add an estimated time to completion

Type:

bool, optional (default: True)

rate#

if set, add the rate information

Type:

bool, optional (default: True)

length#

number of characters showing the progress meter itself. If None, the meter adapts to the buffer width.

TODO: make it variable with the buffer length

Type:

int, optional (default: None)

keep#

If not set, deletes its traces from screen after completion

Type:

bool, optional (default: True)

file#

the buffer to write into

Type:

buffer

mininterval#

minimum time in seconds between two updates of the meter

Type:

float (default: 0.5)

miniters#

minimum iteration number between two updates of the meter

Type:

int, optional (default: 1)

units#

unit of the iteration

Type:

str, optional (default: ‘iters’)

__init__(maxval=None, desc=None, time=True, eta=True, rate=True, length=None, file=None, keep=True, mininterval=0.5, miniters=1, units='iters', **kwargs)[source]#
build_str_meter(n, total, elapsed)[source]#

make a progress string in a shape of:

[----------] k/n  10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]
Parameters:
  • n (int) – number of finished iterations

  • total (int) – total number of iterations, or None

  • elapsed (int) – number of seconds passed since start

Returns:

txt – string representing the meter

Return type:

str

decorator(func)[source]#

Provide a function decorator allowing for counting calls and rates

static format_interval(t)[source]#

make a human readable time interval decomposed into days, hours, minutes and seconds

Parameters:

t (int) – interval in seconds

Returns:

txt – string representing the interval (format: <days>d <hrs>:<min>:<sec>)

Return type:

str

handle_resize(signum, frame)[source]#
iterover(iterable, total=None)[source]#

Get an iterable object, and return an iterator which acts exactly like the iterable, but prints a progress meter and updates it every time a value is requested.

Parameters:
  • iterable (generator or iterable object) – object to iter over.

  • total (int, optional) – the number of iterations is assumed to be the length of the iterator. But sometimes the iterable has no associated length or its length is not the actual number of future iterations. In this case, total can be set to define the number of iterations.

Returns:

gen – pass the values from the initial iterator

Return type:

generator
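Example

A minimal sketch wrapping an iterable with the meter (the description and unit strings are illustrative):

from pyphot.pbar import Pbar

pb = Pbar(desc='processing', units='spectra')
results = []
for value in pb.iterover(range(1000)):
    results.append(value ** 2)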

print_status(s)[source]#

print a status s on the last file line and clean the rest of the line

Parameters:

s (str) – message to write

update(n, desc=None, total=None)[source]#

Kept for backward compatibility and the decorator feature

Parameters:
  • n (int) – force iteration number n

  • desc (str) – update description string

  • total (int) – update the total number of iterations

pyphot.phot module#

Photometric package#

Defines a Filter class and associated functions to extract photometry.

This also include functions to keep libraries up to date

Note

Integrations are done using trapezoid(). Why not Simpson's rule? Simpson's rule fits a quadratic through sequences of 3 points. When filters have sharp edges, the error due to this interpolation ends up being extremely large compared to the uncertainties induced by trapezoidal integration.
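Example

A typical usage sketch (the filter name is an illustrative assumption; depending on the pyphot version the inputs may need explicit units, as in the sandbox module further below):

import numpy as np
from pyphot.phot import get_library

lib = get_library()                          # default HDF filter library
f, = lib.load_filters(['HST_WFC3_F110W'])    # a Filter instance

lamb = np.linspace(8000, 16000, 4000)        # wavelength in AA
spec = 1e-15 * np.ones_like(lamb)            # F_lambda in erg/s/cm2/AA

flux = f.get_flux(lamb, spec)                # filter-weighted flux
mag = -2.5 * np.log10(flux) - f.AB_zero_mag  # AB magnitude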

class Ascii_Library[source]#

Bases: Library

Interface one or multiple directories or files as a filter library

>>> lib = Ascii_Library(['ground', 'hst', 'myfilter.csv'])
__init__(source)[source]#

Construct the library

add_filters(filter_object, fmt='%.6f', **kwargs)[source]#

Add a filter to the library permanently

Parameters:

filter_object (Filter object) – filter to add

get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

load_filters(names, interp=True, lamb=None, filterLib=None)[source]#

load a limited set of filters

Parameters:
  • names (list[str]) – normalized names according to filtersLib

  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

  • filterLib (path) – path to the filter library hd5 file

Returns:

filters – list of filter objects

Return type:

list[filter]

class Constants[source]#

Bases: object

A namespace for constants

c = <Quantity(2.99792458e+18, 'angstrom / second')>#
h = <Quantity(6.62607554e-27, 'erg * second')>#
class Filter[source]#

Bases: object

Filter class

Define a filter by its name, wavelength, and transmission. The type of detector (energy or photon counter) can be specified to adapt the calculations (default: photon).

name#

name of the filter

Type:

str

norm#

normalization factor of the filter

Type:

float

transmit#

transmission curve of the filter

Type:

ndarray

dtype#

detector type, either “photon” or “energy” counter

Type:

str

unit#

wavelength units

Type:

str

property AB_zero_Jy#

AB flux zero point in Jansky (Jy)

property AB_zero_flux#

AB flux zero point in erg/s/cm2/AA

property AB_zero_mag#

AB magnitude zero point

ABmag = -2.5 * log10(f_nu) - 48.60
      = -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60
      = -2.5 * log10(f_lamb) - zpts

property ST_zero_Jy#

ST flux zero point in Jansky (Jy)

property ST_zero_flux#

ST flux zero point in erg/s/cm2/AA

property ST_zero_mag#

ST magnitude zero point: STmag = -2.5 * log10(f_lamb) - 21.1

property Vega_zero_Jy#

Vega flux zero point in Jansky (Jy)

property Vega_zero_flux#

Vega flux zero point in erg/s/cm2/AA

property Vega_zero_mag#

Vega magnitude zero point:

vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega)
        = -2.5 * log10(f_lamb) - zpts

property Vega_zero_photons#

Vega number of photons per wavelength unit

Note

see self.get_Nphotons

__init__(wavelength, transmit, name='', dtype='photon', unit=None)[source]#

Constructor

applyTo(slamb, sflux)[source]#

For compatibility but bad name

apply_transmission(slamb, sflux)[source]#

Apply filter transmission to a spectrum (with reinterpolation of the filter)

Parameters:
  • slamb (ndarray) – spectrum wavelength definition domain

  • sflux (ndarray) – associated flux

Returns:

flux – new spectrum values accounting for the filter

Return type:

float

property cl#

Unitwise wavelength definition

classmethod from_ascii(fname, dtype='csv', **kwargs)[source]#

Load filter from ascii file

property fwhm#

The difference between the two wavelengths at which the filter transmission is half of its maximum

Note

This calculation is not exact but rounded to the nearest passband data points

getFlux(slamb, sflux, axis=-1)[source]#

Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

get_Nphotons(slamb, sflux, axis=-1)[source]#

Get the number of photons through the filter (Ntot / width in the documentation), computed as

getflux() * leff / hc

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux in erg/s/cm2/AA

Returns:

N – Number of photons of the spectrum within the filter

Return type:

float

get_flux(slamb, sflux, axis=-1)[source]#

Same as getFlux. Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

info(show_zeropoints=True)[source]#

display information about the current filter

property leff#

Unitwise effective wavelength: leff = int(lamb * T * Vega dlamb) / int(T * Vega dlamb)

property lmax#

Calculated as the last value with a transmission at least 1% of maximum transmission

property lmin#

Calculated as the first value with a transmission at least 1% of maximum transmission

property lphot#

Photon distribution based effective wavelength. Defined as

lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)

which we calculate as

lphot = get_flux(lamb * vega) / get_flux(vega)

property lpivot#

Unitwise wavelength definition

classmethod make_integration_filter(lmin, lmax, name='', dtype='photon', unit=None)[source]#

Generate a Heaviside (box) filter between lmin and lmax
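Example

A short sketch of a box filter integrating between 4000 and 5000 AA (the name is arbitrary; depending on the pyphot version the spectrum may need explicit units):

import numpy as np
from pyphot.phot import Filter

box = Filter.make_integration_filter(4000, 5000, name='box_4000_5000', unit='AA')

lamb = np.linspace(3000, 6000, 3000)      # AA
spec = 1e-15 * np.ones_like(lamb)         # erg/s/cm2/AA
flux = box.getFlux(lamb, spec)            # integrated energy within the box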

reinterp(lamb)[source]#

reinterpolate filter onto a different wavelength definition

set_dtype(dtype)[source]#

Set the detector type (photon or energy)

set_wavelength_unit(unit)[source]#

Set the wavelength units

to_Table(**kwargs)[source]#

Export filter to a SimpleTable object

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable)

to_dict()[source]#

Return a dictionary of the filter

property wavelength#

Unitwise wavelength definition

property width#

Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as the one covered by the filter transmission curve.

W = int(T dlamb) / max(T)

write_to(fname, **kwargs)[source]#

Export filter to a file

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable.write)

class HDF_Library[source]#

Bases: Library

Storage based on HDF

__init__(source=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), mode='r')[source]#

Construct the library

add_filter(f, **kwargs)[source]#

Add a filter to the library permanently

Parameters:

f (Filter object) – filter to add

get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

Parameters:
  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

Returns:

filters – list of filter objects

Return type:

list[filter]

load_filters(names, interp=True, lamb=None, filterLib=None)[source]#

load a limited set of filters

Parameters:
  • names (list[str]) – normalized names according to filtersLib

  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

  • filterLib (path) – path to the filter library hd5 file

Returns:

filters – list of filter objects

Return type:

list[filter]

class Library[source]#

Bases: object

Common grounds for filter libraries

__init__(source=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), *args, **kwargs)[source]#

Construct the library

add_filter(f)[source]#

add a filter to the library

property content#

Get the content list

find(name, case_sensitive=True)[source]#
classmethod from_ascii(filename, **kwargs)[source]#
classmethod from_hd5(filename, **kwargs)[source]#
get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

to_csv(directory='./', progress=True, **kwargs)[source]#

Export each filter into a csv file with its own name

Parameters:
  • directory (str) – directory to write into

  • progress (bool) – show progress if set

to_hdf(fname='filters.hd5', progress=True, **kwargs)[source]#

Export the filters into an HDF file

Parameters:
  • directory (str) – directory to write into

  • progress (bool) – show progress if set

class UncertainFilter[source]#

Bases: Filter

What could be a filter with uncertainties

mean#

mean passband transmission

Type:

Filter

samples#

samples from the uncertain passband transmission model

Type:

sequence(Filter)

name#

name of the passband

Type:

string

dtype#

detector type, either “photon” or “energy” counter

Type:

str

unit#

wavelength units

Type:

str

property AB_zero_Jy#

AB flux zero point in Jansky (Jy)

property AB_zero_flux#

AB flux zero point in erg/s/cm2/AA

property AB_zero_mag#

AB magnitude zero point

ABmag = -2.5 * log10(f_nu) - 48.60
      = -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60
      = -2.5 * log10(f_lamb) - zpts

property ST_zero_Jy#

ST flux zero point in Jansky (Jy)

property ST_zero_flux#

ST flux zero point in erg/s/cm2/AA

property ST_zero_mag#

ST magnitude zero point: STmag = -2.5 * log10(f_lamb) - 21.1

property Vega_zero_Jy#

Vega flux zero point in Jansky (Jy)

property Vega_zero_flux#

Vega flux zero point in erg/s/cm2/AA

property Vega_zero_mag#

Vega magnitude zero point:

Vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega)
        = -2.5 * log10(f_lamb) - zpts

property Vega_zero_photons#

Vega number of photons per wavelength unit

Note

see self.get_Nphotons

__init__(wavelength, mean_transmit, samples, name='', dtype='photon', unit=None)[source]#

Constructor

apply_transmission(slamb, sflux)[source]#

Apply filter transmission to a spectrum (with reinterpolation of the filter)

Parameters:
  • slamb (ndarray) – spectrum wavelength definition domain

  • sflux (ndarray) – associated flux

Returns:

flux – new spectrum values accounting for the filter

Return type:

float

property cl#

Unitwise wavelength definition

classmethod from_ascii(fname, dtype='csv', **kwargs)[source]#

Load filter from ascii file

classmethod from_gp_model(model, xprime=None, n_samples=10, **kwargs)[source]#

Generate a filter object from a sklearn GP model

Parameters:
  • model (sklearn.gaussian_process.GaussianProcessRegressor) – model of the passband

  • xprime (ndarray) – wavelength to express the model in addition to the training points

  • n_samples (int) – number of samples to generate from the model.

  • kwargs (dict) – UncertainFilter keywords
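Example

A sketch of building an uncertain passband from a fitted scikit-learn GP (the data are synthetic and the default kernel may need tuning in practice):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from pyphot.phot import UncertainFilter

# noisy transmission measurements of a fictitious passband
wave = np.linspace(4000, 6000, 50)
transmit = np.exp(-0.5 * ((wave - 5000.0) / 300.0) ** 2) + 0.02 * np.random.randn(wave.size)

model = GaussianProcessRegressor().fit(wave[:, None], transmit)
xprime = np.linspace(4000, 6000, 500)
filt = UncertainFilter.from_gp_model(model, xprime=xprime, n_samples=10,
                                     name='gp_band', unit='AA')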

property fwhm#

The difference between the two wavelengths at which the filter transmission is half of its maximum

Note

This calculation is not exact but rounded to the nearest passband data points
getFlux(slamb, sflux, axis=-1)[source]#

Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

get_Nphotons(slamb, sflux, axis=-1)[source]#

Get the number of photons through the filter (Ntot / width in the documentation), computed as

getflux() * leff / hc

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux in erg/s/cm2/AA

Returns:

N – Number of photons of the spectrum within the filter

Return type:

float

info(show_zeropoints=True)[source]#

display information about the current filter

property leff#

Unitwise effective wavelength: leff = int(lamb * T * Vega dlamb) / int(T * Vega dlamb)

property lmax#

Calculated as the last value with a transmission at least 1% of maximum transmission

property lmin#

Calculated as the first value with a transmission at least 1% of maximum transmission

property lphot#

Photon distribution based effective wavelength. Defined as

lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)

which we calculate as

lphot = get_flux(lamb * vega) / get_flux(vega)

property lpivot#

Unitwise wavelength definition

reinterp(lamb)[source]#

reinterpolate filter onto a different wavelength definition

set_dtype(dtype)[source]#

Set the detector type (photon or energy)

set_wavelength_unit(unit)[source]#

Set the wavelength units

to_Table(**kwargs)[source]#

Export filter to a SimpleTable object

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable)

property transmit#

Transmission curves

property wavelength#

Unitwise wavelength definition

property wavelength_unit#

Unit wavelength definition

property width#

Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as the one covered by the filter transmission curve.

W = int(T dlamb) / max(T)

get_library(fname=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), **kwargs)[source]#

Finds the appropriate class to load the library

class set_method_default_units[source]#

Bases: object

Decorator for classmethods that makes sure that the inputs of slamb, sflux are in given units

expects the decorated method to be defined as

>>> def methodname(self, lamb, flux)

__init__(wavelength_unit, flux_unit, output_unit=None)[source]#
classmethod force_units(value, unit)[source]#

pyphot.sandbox module#

Photometric package using Astropy Units#

Defines a Filter class and associated functions to extract photometry.

This also include functions to keep libraries up to date

Note

Integrations are done using trapezoid(). Why not Simpson's rule? Simpson's rule fits a quadratic through sequences of 3 points. When filters have sharp edges, the error due to this interpolation ends up being extremely large compared to the uncertainties induced by trapezoidal integration.
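Example

A unit-aware usage sketch (it assumes pyphot exposes a pint unit registry as pyphot.unit and that the filter name exists in the shipped library):

import numpy as np
from pyphot import unit
from pyphot.sandbox import get_library

lib = get_library()
f, = lib.load_filters(['HST_WFC3_F110W'])

wave = np.linspace(8000, 16000, 4000) * unit['AA']
spec = 1e-15 * np.ones(4000) * unit['erg/s/cm**2/AA']

flux = f.get_flux(wave, spec)                      # Quantity in erg/s/cm2/AA
mag = -2.5 * np.log10(flux.value) - f.AB_zero_mag  # AB magnitude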

class Constants[source]#

Bases: object

A namespace for constants

c = <Quantity(2.99792458e+18, 'angstrom / second')>#
h = <Quantity(6.62607554e-27, 'erg * second')>#
class UncertainFilter[source]#

Bases: UnitFilter

What could be a filter with uncertainties

mean#

mean passband transmission

Type:

Filter

samples#

samples from the uncertain passband transmission model

Type:

sequence(Filter)

name#

name of the passband

Type:

string

dtype#

detector type, either “photon” or “energy” counter

Type:

str

unit#

wavelength units

Type:

str

property AB_zero_Jy#

AB flux zero point in Jansky (Jy)

property AB_zero_flux#

AB flux zero point in erg/s/cm2/AA

property AB_zero_mag#

AB magnitude zero point

ABmag = -2.5 * log10(f_nu) - 48.60
      = -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60
      = -2.5 * log10(f_lamb) - zpts

property ST_zero_Jy#

ST flux zero point in Jansky (Jy)

property ST_zero_flux#

ST flux zero point in erg/s/cm2/AA

property ST_zero_mag#

ST magnitude zero point: STmag = -2.5 * log10(f_lamb) - 21.1

property Vega_zero_Jy#

Vega flux zero point in Jansky (Jy)

property Vega_zero_flux#

Vega flux zero point in erg/s/cm2/AA

property Vega_zero_mag#

Vega magnitude zero point:

Vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega)
        = -2.5 * log10(f_lamb) - zpts

property Vega_zero_photons#

Vega number of photons per wavelength unit

Note

see self.get_Nphotons

__init__(wavelength, mean_transmit, samples, name='', dtype='photon', unit=None)[source]#

Constructor

apply_transmission(slamb, sflux)[source]#

Apply filter transmission to a spectrum (with reinterpolation of the filter)

Parameters:
  • slamb (ndarray) – spectrum wavelength definition domain

  • sflux (ndarray) – associated flux

Returns:

flux – new spectrum values accounting for the filter

Return type:

float

property cl#

Unitwise wavelength definition

classmethod from_ascii(fname, dtype='csv', **kwargs)[source]#

Load filter from ascii file

classmethod from_gp_model(model, xprime=None, n_samples=10, **kwargs)[source]#

Generate a filter object from a sklearn GP model

Parameters:
  • model (sklearn.gaussian_process.GaussianProcessRegressor) – model of the passband

  • xprime (ndarray) – wavelength to express the model in addition to the training points

  • n_samples (int) – number of samples to generate from the model.

  • kwargs (dict) – UncertainFilter keywords

property fwhm#

The difference between the two wavelengths at which the filter transmission is half of its maximum

Note

This calculation is not exact but rounded to the nearest passband data points

getFlux(slamb, sflux, axis=-1)[source]#

Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

get_Nphotons(slamb, sflux, axis=-1)[source]#

Get the number of photons through the filter (Ntot / width in the documentation), computed as

getflux() * leff / hc

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux in erg/s/cm2/AA

Returns:

N – Number of photons of the spectrum within the filter

Return type:

float

info(show_zeropoints=True)[source]#

display information about the current filter

property leff#

Unitwise effective wavelength: leff = int(lamb * T * Vega dlamb) / int(T * Vega dlamb)

property lmax#

Calculated as the last value with a transmission at least 1% of maximum transmission

property lmin#

Calculated as the first value with a transmission at least 1% of maximum transmission

property lphot#

Photon distribution based effective wavelength. Defined as

lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)

which we calculate as

lphot = get_flux(lamb * vega) / get_flux(vega)

property lpivot#

Unitwise wavelength definition

reinterp(lamb)[source]#

reinterpolate filter onto a different wavelength definition

set_dtype(dtype)[source]#

Set the detector type (photon or energy)

set_wavelength_unit(unit)[source]#

Set the wavelength units

to_Table(**kwargs)[source]#

Export filter to a SimpleTable object

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable)

property transmit#

Transmission curves

property wavelength#

Unitwise wavelength definition

property wavelength_unit#

Unit wavelength definition

property width#

Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as the one covered by the filter transmission curve.

W = int(T dlamb) / max(T)

class UnitAscii_Library[source]#

Bases: UnitLibrary

Interface one or multiple directories or files as a filter library

>>> lib = Ascii_Library(['ground', 'hst', 'myfilter.csv'])
__init__(source)[source]#

Construct the library

add_filters(filter_object, fmt='%.6f', **kwargs)[source]#

Add a filter to the library permanently

Parameters:

filter_object (Filter object) – filter to add

get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

load_filters(names, interp=True, lamb=None, filterLib=None)[source]#

load a limited set of filters

Parameters:
  • names (list[str]) – normalized names according to filtersLib

  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

  • filterLib (path) – path to the filter library hd5 file

Returns:

filters – list of filter objects

Return type:

list[filter]

class UnitFilter[source]#

Bases: object

Evolution of Filter that makes sure the input spectra and output fluxes have units, to avoid misinterpretation.

Note the usual (non-SI) units of flux definitions:

flam = erg/s/cm**2/AA

fnu = erg/s/cm**2/Hz

photflam = photon/s/cm**2/AA

photnu = photon/s/cm**2/Hz

Define a filter by its name, wavelength, and transmission. The type of detector (energy or photon counter) can be specified to adapt the calculations (default: photon).

name#

name of the filter

Type:

str

norm#

normalization factor of the filter

Type:

float

transmit#

transmission curve of the filter

Type:

ndarray

dtype#

detector type, either “photon” or “energy” counter

Type:

str

unit#

wavelength units

Type:

str

property AB_zero_Jy#

AB flux zero point in Jansky (Jy)

property AB_zero_flux#

AB flux zero point in erg/s/cm2/AA

property AB_zero_mag#

AB magnitude zero point

ABmag = -2.5 * log10(f_nu) - 48.60
      = -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60
      = -2.5 * log10(f_lamb) - zpts

property ST_zero_Jy#

ST flux zero point in Jansky (Jy)

property ST_zero_flux#

ST flux zero point in erg/s/cm2/AA

property ST_zero_mag#

ST magnitude zero point: STmag = -2.5 * log10(f_lamb) - 21.1

property Vega_zero_Jy#

Vega flux zero point in Jansky (Jy)

property Vega_zero_flux#

Vega flux zero point in erg/s/cm2/AA

property Vega_zero_mag#

Vega magnitude zero point:

vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega)
        = -2.5 * log10(f_lamb) - zpts

property Vega_zero_photons#

Vega number of photons per wavelength unit

Note

see self.get_Nphotons

__init__(wavelength, transmit, name='', dtype='photon', unit=None)[source]#

Constructor

applyTo(slamb, sflux)[source]#

For compatibility but bad name

apply_transmission(slamb, sflux)[source]#

Apply filter transmission to a spectrum (with reinterpolation of the filter)

Parameters:
  • slamb (ndarray) – spectrum wavelength definition domain

  • sflux (ndarray) – associated flux

Returns:

flux – new spectrum values accounting for the filter

Return type:

float

property cl#

Unitwise wavelength definition

classmethod from_ascii(fname, dtype='csv', **kwargs)[source]#

Load filter from ascii file

property fwhm#

The difference between the two wavelengths at which the filter transmission is half of its maximum

Note

This calculation is not exact but rounded to the nearest passband data points

getFlux(slamb, sflux, axis=-1)[source]#

Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

get_Nphotons(slamb, sflux, axis=-1)[source]#

Get the number of photons through the filter (Ntot / width in the documentation), computed as

getflux() * leff / hc

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux in erg/s/cm2/AA

Returns:

N – Number of photons of the spectrum within the filter

Return type:

float

get_flux(slamb, sflux, axis=-1)[source]#

Same as getFlux. Integrate the flux within the filter and return the integrated energy. If you plan to apply the filter to many spectra, consider extractSEDs instead.

Parameters:
  • slamb (ndarray(dtype=float, ndim=1)) – spectrum wavelength definition domain

  • sflux (ndarray(dtype=float, ndim=1)) – associated flux

Returns:

flux – Energy of the spectrum within the filter

Return type:

float

info(show_zeropoints=True)[source]#

display information about the current filter

property leff#

Unitwise effective wavelength: leff = int(lamb * T * Vega dlamb) / int(T * Vega dlamb)

property lmax#

Calculated as the last value with a transmission at least 1% of maximum transmission

property lmin#

Calculated as the first value with a transmission at least 1% of maximum transmission

property lphot#

Photon distribution based effective wavelength. Defined as

lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)

which we calculate as

lphot = get_flux(lamb * vega) / get_flux(vega)

property lpivot#

Unitwise wavelength definition

classmethod make_integration_filter(lmin, lmax, name='', dtype='photon', unit=None)[source]#

Generate a Heaviside (box) filter between lmin and lmax

reinterp(lamb)[source]#

reinterpolate filter onto a different wavelength definition

set_dtype(dtype)[source]#

Set the detector type (photon or energy)

set_wavelength_unit(unit)[source]#

Set the wavelength units

to_Table(**kwargs)[source]#

Export filter to a SimpleTable object

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable)

to_dict()[source]#

Return a dictionary of the filter

property wavelength#

Unitwise wavelength definition

property width#

Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as the one covered by the filter transmission curve.

W = int(T dlamb) / max(T)

write_to(fname, **kwargs)[source]#

Export filter to a file

Parameters:
  • fname (str) – filename

  • parameters (Uses SimpleTable.write)

class UnitHDF_Library[source]#

Bases: UnitLibrary

Storage based on HDF

__init__(source=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), mode='r')[source]#

Construct the library

add_filter(f, **kwargs)[source]#

Add a filter to the library permanently

Parameters:

f (Filter object) – filter to add

get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

Parameters:
  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

Returns:

filters – list of filter objects

Return type:

list[filter]

load_filters(names, interp=True, lamb=None, filterLib=None)[source]#

load a limited set of filters

Parameters:
  • names (list[str]) – normalized names according to filtersLib

  • interp (bool) – reinterpolate the filters over given lambda points

  • lamb (ndarray[float, ndim=1]) – desired wavelength definition of the filter

  • filterLib (path) – path to the filter library hd5 file

Returns:

filters – list of filter objects

Return type:

list[filter]

class UnitLibrary[source]#

Bases: object

Common grounds for filter libraries

__init__(source=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), *args, **kwargs)[source]#

Construct the library

add_filter(f)[source]#

add a filter to the library

property content#

Get the content list

find(name, case_sensitive=True)[source]#
classmethod from_ascii(filename, **kwargs)[source]#
classmethod from_hd5(filename, **kwargs)[source]#
get_library_content()[source]#

get the content of the library

load_all_filters(interp=True, lamb=None)[source]#

load all filters from the library

to_csv(directory='./', progress=True, **kwargs)[source]#

Export each filter into a csv file with its own name

Parameters:
  • directory (str) – directory to write into

  • progress (bool) – show progress if set

to_hdf(fname='filters.hd5', progress=True, **kwargs)[source]#

Export the filters into an HDF file

Parameters:
  • directory (str) – directory to write into

  • progress (bool) – show progress if set

class UnitLickIndex[source]#

Bases: LickIndex

Define a Lick Index similarly to a Filter object

get(wave, flux, **kwargs)[source]#

compute spectral index after continuum subtraction

Parameters:
  • w (ndarray (nw, )) – array of wavelengths in AA

  • flux (ndarray (N, nw)) – array of flux values for different spectra in the series

  • degree (int (default 1)) – degree of the polynomial fit to the continuum

  • nocheck (bool) – set to silently pass on spectral domain mismatch. otherwise raises an error when index is not covered

Returns:

ew – equivalent width or magnitude array

Return type:

ndarray (N,)

Raises:
  • ValueError – when the spectral coverage wave does not cover the index range

class UnitLickLibrary[source]#

Bases: LickLibrary

Collection of Lick indices

__init__(fname=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/licks.dat'), comment='#')[source]#
property content#
property description#

any comment in the input file

find(name, case_sensitive=True)[source]#
get_library_content()[source]#
get_library(fname=PosixPath('/home/runner/work/pyphot/pyphot/pyphot/libs/new_filters.hd5'), **kwargs)[source]#

Finds the appropriate class to load the library

hasUnit(val)[source]#

Check if an object has units

reduce_resolution(wi, fi, fwhm0=<Quantity(0.55, 'angstrom')>, sigma_floor=<Quantity(0.2, 'angstrom')>)[source]#

Adapt the resolution of the spectra to match the lick definitions

Lick definitions have different resolution elements as a function of wavelength. These definitions are hard-coded in this function.

Parameters:
  • wi (ndarray (n, )) – wavelength definition

  • fi (ndarray (nspec, n) or (n, )) – spectra to convert

  • fwhm0 (float) – initial broadening in the spectra fi

  • sigma_floor (float) – minimal dispersion to consider

Returns:

flux_red – reduced spectra

Return type:

ndarray (nspec, n) or (n, )

class set_method_default_units[source]#

Bases: object

Decorator for classmethods that makes sure that the inputs of slamb, sflux are in given units

expects the decorated method to be defined as

>>> def methodname(self, lamb, flux)

__init__(wavelength_unit, flux_unit, output_unit=None)[source]#
classmethod force_units(value, unit)[source]#

pyphot.simpletable module#

This file implements a Table class that is designed to be the basis of any format.

Requirements#

  • FITS format:
    • astropy:

      provides a replacement to pyfits; pyfits can still be used instead, but astropy is now the default

  • HDF5 format:
    • pytables

A RuntimeError will be raised when writing to a format whose associated package is missing.

class AstroHelpers[source]#

Bases: object

Helpers related to astronomy data

static conesearch(ra0, dec0, ra, dec, r, outtype=0)[source]#

Perform a cone search on a table

Parameters:
  • ra0 (ndarray[ndim=1, dtype=float]) – RA of the catalogue sources, in degrees

  • dec0 (ndarray[ndim=1, dtype=float]) – DEC of the catalogue sources, in degrees

  • ra (float) – RA to look for (in degrees)

  • dec (float) – DEC to look for (in degrees)

  • r (float) – distance in degrees

  • outtype (int) –

    type of outputs

    0 – minimal, indices of matching coordinates

    1 – indices and distances of matching coordinates

    2 – full, boolean filter and distances

Returns:

t

if outtype is 0:

only return indices from ra0, dec0

elif outtype is 1:

return indices from ra0, dec0 and distances

elif outtype is 2:

return conditional vector and distance to all ra0, dec0

Return type:

tuple
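Example

A minimal sketch on synthetic coordinate arrays:

import numpy as np
from pyphot.simpletable import AstroHelpers

ra0 = np.random.uniform(0, 360, 1000)     # catalogue RA in degrees
dec0 = np.random.uniform(-90, 90, 1000)   # catalogue DEC in degrees

# indices and angular distances of sources within 1 degree of (180, 0)
idx, dist = AstroHelpers.conesearch(ra0, dec0, 180.0, 0.0, 1.0, outtype=1)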

static deg2dms(val, delim=':')[source]#

Convert degrees into sexagesimal coordinates (DD:MM:SS)

Parameters:
  • deg (float) – angle in degrees

  • delimiter (str) – character delimiting the fields

Returns:

str – sexagesimal string representation of the angle

Return type:

string or sequence

static deg2hms(val, delim=':')[source]#

Convert degrees into sexagesimal coordinates (HH:MM:SS)

Parameters:
  • deg (float) – angle in degrees

  • delimiter (str) – character delimiting the fields

Returns:

str – sexagesimal string representation of the angle

Return type:

string or sequence

static dms2deg(_str, delim=':')[source]#

Convert hex coordinates into degrees

Parameters:
  • str (string or sequence) – string to convert

  • delimiter (str) – character delimiting the fields

Returns:

deg – angle in degrees

Return type:

float

static euler(ai_in, bi_in, select, b1950=False, dtype='f8')[source]#

Transform between Galactic, celestial, and ecliptic coordinates. Celestial coordinates (RA, Dec) should be given in equinox J2000 unless b1950 is True.

select | From | To

1 | RA-Dec (2000) | Galactic

2 | Galactic | RA-Dec

3 | RA-Dec | Ecliptic

4 | Ecliptic | RA-Dec

5 | Ecliptic | Galactic

6 | Galactic | Ecliptic

Parameters:
  • long_in (float, or sequence) – Input Longitude in DEGREES, scalar or vector.

  • lat_in (float, or sequence) – Latitude in DEGREES

  • select (int) – Integer from 1 to 6 specifying type of coordinate transformation.

  • b1950 (bool) – set equinox set to 1950

Returns:

  • long_out (float, seq) – Output Longitude in DEGREES

  • lat_out (float, seq) – Output Latitude in DEGREES

Note

Written W. Landsman, February 1987 Adapted from Fortran by Daryl Yentis NRL Converted to IDL V5.0 W. Landsman September 1997 Made J2000 the default, added /FK4 keyword W. Landsman December 1998 Add option to specify SELECT as a keyword W. Landsman March 2003 Converted from IDL to numerical Python: Erin Sheldon, NYU, 2008-07-02

static hms2deg(_str, delim=':')[source]#

Convert hex coordinates into degrees

Parameters:
  • str (string or sequence) – string to convert

  • delimiter (str) – character delimiting the fields

Returns:

deg – angle in degrees

Return type:

float

static sphdist(ra1, dec1, ra2, dec2)[source]#

measures the spherical distance between 2 points

Parameters:
  • ra1 (float or sequence) – first right ascensions in degrees

  • dec1 (float or sequence) – first declination in degrees

  • ra2 (float or sequence) – second right ascensions in degrees

  • dec2 (float or sequence) – second declination in degrees

Returns:

Outputs – returns a distance in degrees

Return type:

float or sequence

class AstroTable[source]#

Bases: SimpleTable

Derived from SimpleTable, this class adds implementations of common astro tools, especially conesearch

__init__(*args, **kwargs)[source]#
coneSearch(ra, dec, r, outtype=0)[source]#

Perform a cone search on a table

Parameters:
  • ra0 (ndarray[ndim=1, dtype=float]) – column name to use as RA source in degrees

  • dec0 (ndarray[ndim=1, dtype=float]) – column name to use as DEC source in degrees

  • ra (float) – RA to look for (in degrees)

  • dec (float) – DEC to look for (in degrees)

  • r (float) – distance in degrees

  • outtype (int) –

    type of outputs

    0 – minimal, indices of matching coordinates

    1 – indices and distances of matching coordinates

    2 – full, boolean filter and distances

Returns:

t

if outtype is 0:

only return indices from ra0, dec0

elif outtype is 1:

return indices from ra0, dec0 and distances

elif outtype is 2:

return conditional vector and distance to all ra0, dec0

Return type:

tuple

get_DEC(degree=True)[source]#

Returns DEC, converted from hexa/sexa into degrees

get_RA(degree=True)[source]#

Returns RA, converted from hexa/sexa into degrees

info()[source]#

prints information on the table

selectWhere(fields, condition=None, condvars=None, cone=None, zone=None, **kwargs)[source]#

Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result. A cone search is also possible through the keyword cone, formatted as (ra, dec, r); a zone search is also possible through the keyword zone, formatted as (ramin, ramax, decmin, decmax).

Combination of multiple selections is also available.

set_DEC(val)[source]#

Set the column that defines DEC coordinates

set_RA(val)[source]#

Set the column that defines RA coordinates

where(condition=None, condvars=None, cone=None, zone=None, **kwargs)[source]#

Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result.

Parameters:
  • condition (str) – expression to evaluate on the table includes mathematical operations and attribute names

  • condvars (dictionary, optional) – A dictionary that replaces the local operands in current frame.

Returns:

  • out (ndarray/ tuple of ndarrays)

  • result equivalent to np.where()

zoneSearch(ramin, ramax, decmin, decmax, outtype=0)[source]#

Perform a zone search on a table, i.e., a rectangular selection

Parameters:
  • ramin (float) – minimal value of RA

  • ramax (float) – maximal value of RA

  • decmin (float) – minimal value of DEC

  • decmax (float) – maximal value of DEC

  • outtype (int) –

    type of outputs

    0 or 1 – minimal, indices of matching coordinates 2 – full, boolean filter and distances

Returns:

r – indices or conditional sequence of matching values

Return type:

sequence

class SimpleTable[source]#

Bases: object

Table class that is designed to be the basis of any format wrapping around numpy recarrays

fname#

if str, the file to read from. This may be limited to the formats currently handled automatically. If the format is not handled, you can try providing an object instead.

if an object with a structure like dict, ndarray, or recarray-like, the data will be encapsulated into a Table

Type:

str or object

caseless#

if set, column names will be caseless during operations

Type:

bool

aliases#

set of column aliases (can be defined later set_alias())

Type:

dict

units#

set of column units (can be defined later set_unit())

Type:

dict

desc#

set of column description or comments (can be defined later set_comment())

Type:

dict

header#

key, value pair corresponding to the attributes of the table

Type:

dict

__init__(fname, *args, **kwargs)[source]#
addCol(name, data, dtype=None, unit=None, description=None)#

Add one or multiple columns to the table

Parameters:
  • name (str or sequence(str)) – The name(s) of the column(s) to add

  • data (ndarray, or sequence of ndarray) – The column data, or sequence of columns

  • dtype (dtype) – numpy dtype for the data to add

  • unit (str) – The unit of the values in the column

  • description (str) – A description of the content of the column

addLine(iterable)#

Append one row in this table.

see also: stack()

Parameters:

iterable (iterable) – line to add

add_column(name, data, dtype=None, unit=None, description=None)[source]#

Add one or multiple columns to the table

Parameters:
  • name (str or sequence(str)) – The name(s) of the column(s) to add

  • data (ndarray, or sequence of ndarray) – The column data, or sequence of columns

  • dtype (dtype) – numpy dtype for the data to add

  • unit (str) – The unit of the values in the column

  • description (str) – A description of the content of the column

append_row(iterable)[source]#

Append one row in this table.

see also: stack()

Parameters:

iterable (iterable) – line to add

property colnames#

Sequence of column names

compress(condition, axis=None, out=None)[source]#

Return selected slices of an array along given axis.

When working along a given axis, a slice along that axis is returned in output for each index where condition evaluates to True. When working on a 1-D array, compress is equivalent to extract.

Parameters:
  • condition (1-D array of bools) – Array that selects which entries to return. If len(condition) is less than the size of a along the given axis, then output is truncated to the length of the condition array.

  • axis (int, optional) – Axis along which to take slices. If None (default), work on the flattened array.

  • out (ndarray, optional) – Output array. Its type is preserved and it must be of the right shape to hold the output.

Returns:

compressed_array – A copy of a without the slices along axis for which condition is false.

Return type:

ndarray

delCol(names)#

Remove several columns from the table

Parameters:

names (sequence) – A list containing the names of the columns to remove

property dtype#

dtype of the data

property empty_row#

Return an empty row array respecting the table format

evalexpr(expr, exprvars=None, dtype=<class 'float'>)[source]#
evaluate expression based on the data and external variables

all np functions can be used (log, exp, pi…)

Parameters:
  • expr (str) – expression to evaluate on the table includes mathematical operations and attribute names

  • exprvars (dictionary, optional) – A dictionary that replaces the local operands in current frame.

  • dtype (dtype definition) – dtype of the output array

Returns:

out – array of the result

Return type:

NumPy array
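Example

A sketch building a table from a dict and evaluating an expression on it (column names and values are illustrative):

import numpy as np
from pyphot.simpletable import SimpleTable

data = {'flux': np.array([1.0e-14, 2.5e-15, 4.0e-16]),
        'dist': np.array([10.0, 25.0, 40.0])}
t = SimpleTable(data)

# numpy functions and constants (here pi) can be used inside the expression
lum = t.evalexpr('4 * pi * (dist * 3.086e18) ** 2 * flux')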

find_duplicate(index_only=False, values_only=False)[source]#

Find duplicated entries in the table and return a list of duplicated elements. At this time this only works when two lines are the exact same entry, not when two lines merely have the same values.

get(v, full_match=False)[source]#

returns a table from columns given as v

this function is equivalent to __getitem__() but preserves the Table format and associated properties (units, description, header)

Parameters:
  • v (str) – pattern to filter the keys with

  • full_match (bool) – if set, use re.fullmatch() instead of re.match()

groupby(*key)[source]#

Create an iterator which returns (key, sub-table) pairs grouped by each value of key

Parameters:

key (str) – expression or pattern to filter the keys with

Returns:

  • key (str or sequence) – group key

  • tab (SimpleTable instance) – sub-table of the group; header, aliases and column metadata are preserved (linked to the master table).

info()[source]#

prints information on the table

items()[source]#

Iterator on the (key, value) pairs

iterkeys()[source]#

Iterator over the columns of the table

itervalues()[source]#

Iterator over the lines of the table

join_by(r2, key, jointype='inner', r1postfix='1', r2postfix='2', defaults=None, asrecarray=False, asTable=True)[source]#

Join arrays r1 and r2 on key key.

The key should be either a string or a sequence of strings corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.

Parameters:
  • key (str or seq(str)) – corresponding to the fields used for comparison.

  • r2 (Table) – Table to join with

  • jointype (str in {'inner', 'outer', 'leftouter'}) –

    • ‘inner’ : returns the elements common to both r1 and r2.

    • ’outer’ : returns the common elements as well as the elements of r1 not in r2 and the elements of r2 not in r1.

    • ’leftouter’ : returns the common elements and the elements of r1 not in r2.

  • r1postfix (str) – String appended to the names of the fields of r1 that are present in r2

  • r2postfix (str) – String appended to the names of the fields of r2 that are present in r1

  • defaults (dict) – Dictionary mapping field names to the corresponding default values.

Returns:

  • tab (Table) – joined table

  • .. note::

    • The output is sorted along the key.

    • A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are some duplicates…

keys(regexp=None, full_match=False)[source]#

Return the data column names or a subset of it

Parameters:
  • regexp (str) – pattern to filter the keys with

  • full_match (bool) – if set, use re.fullmatch() instead of re.match(). Try to apply the pattern at the start of the string, returning a match object, or None if no match was found.

Returns:

seq – sequence of keys

Return type:

sequence

match(r2, key)[source]#

Returns the indices at which the tables match; matching uses 2 columns that are compared in values

Parameters:
  • r2 (Table) – second table to use

  • key (str) – fields used for comparison.

Returns:

indexes – tuple of both index lists where the two columns match.

Return type:

tuple

property name#

name of the table given by the Header[‘NAME’] attribute

property nbytes#

number of bytes of the object

property ncols#

number of columns

property nrows#

number of lines

pop_columns(names)[source]#

Pop several columns from the table

Parameters:

names (sequence) – A list containing the names of the columns to remove

Returns:

values – list of columns

Return type:

tuple

pprint(idx=None, fields=None, ret=False, all=False, full_match=False, headerChar='-', delim=' | ', endline='\n', **kwargs)[source]#
Pretty print the table content

you can select the table parts to display using idx to select the rows and fields to only display some columns (ret is only for internal use)

Parameters:
  • idx (sequence, slice) – sub selection to print

  • fields (str, sequence) – if str can be a regular expression, and/or list of fields separated by spaces or commas

  • ret (bool) – if set return the string representation instead of printing the result

  • all (bool) – if set, force to show all rows

  • headerChar (char) – Character to be used for the row separator line

  • delim (char) – The column delimiter.
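
For instance, with a hypothetical table, the display can be restricted to a few rows and a column-name pattern:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'magV': np.linspace(10, 15, 6),
...                    'magB': np.linspace(11, 16, 6),
...                    'flux': np.ones(6)})
>>> tab.pprint(idx=slice(0, 5), fields='mag')   # first five rows, columns matching 'mag'
>>> text = tab.pprint(ret=True)                 # capture the rendering instead of printing it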

pprint_entry(num, keys=None)[source]#

Print one row with its keys and values formatted to be readable

Parameters:
  • num (int, slice) – index selection

  • keys (sequence or str) – if a str, it can be a regular expression; if a sequence, the keys to print

remove_column(names)#

Remove several columns from the table

Parameters:

names (sequence) – A list containing the names of the columns to remove

remove_columns(names)[source]#

Remove several columns from the table

Parameters:

names (sequence) – A list containing the names of the columns to remove

resolve_alias(colname)[source]#

Return the name of an aliased column.

Given an alias, return the column name it aliases. This function is a no-op if the alias is a column name itself.

Aliases are defined by using .define_alias()

reverse_alias(colname)[source]#

Return aliases of a given column.

Given a colname, return a sequence of aliases associated with this column. Aliases are defined using .define_alias().

select(fields, indices=None, **kwargs)[source]#

Select only a few fields in the table

Parameters:
  • fields (str or sequence) – fields to keep in the resulting table

  • indices (sequence or slice) – extract only on these indices

Returns:

tab – resulting table

Return type:

SimpleTable instance
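
A minimal sketch, with hypothetical columns, of extracting a sub-table:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'ID': np.arange(6),
...                    'magV': np.linspace(10, 15, 6),
...                    'magB': np.linspace(11, 16, 6)})
>>> sub = tab.select(['ID', 'magV'], indices=slice(0, 3))   # two columns, first three rows
>>> sub.keys(), sub.nrows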

selectWhere(fields, condition, condvars=None, **kwargs)[source]#
Read table data fulfilling the given condition.

Only the rows fulfilling the condition are included in the result.

Parameters:
  • fields (str or sequence) – fields to keep in the resulting table

  • condition (str) – expression to evaluate on the table; it may include mathematical operations and attribute (column) names

  • condvars (dictionary, optional) – A dictionary that replaces the local operands in the current frame.

Returns:

tab – resulting table

Return type:

SimpleTable instance
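
A sketch of a conditional selection on hypothetical columns; condvars passes an external value into the expression:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'magV': np.array([12.0, 14.5, 16.2, 18.1]),
...                    'flux': np.array([1.0, 0.5, 0.2, 0.05])})
>>> bright = tab.selectWhere(['magV', 'flux'], 'magV < 15')
>>> cut = 15.0
>>> bright = tab.selectWhere(['magV', 'flux'], 'magV < cut', condvars={'cut': cut})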

setComment(colname, comment)#

Set the comment of a column referenced by its name

Parameters:
  • colname (str) – column name or registered alias

  • comment (str) – column description

setUnit(colname, unit)#

Set the unit of a column referenced by its name

Parameters:
  • colname (str) – column name or registered alias

  • unit (str) – unit description

set_alias(alias, colname)[source]#

Define an alias to a column

Parameters:
  • alias (str) – The new alias of the column

  • colname (str) – The column being aliased
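
A short sketch tying set_alias, resolve_alias, and reverse_alias together on a hypothetical column:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'F475W': np.zeros(4)})
>>> tab.set_alias('g', 'F475W')     # 'g' now refers to the column 'F475W'
>>> tab.resolve_alias('g')          # -> 'F475W'
>>> tab.reverse_alias('F475W')      # aliases registered for that column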

set_comment(colname, comment)[source]#

Set the comment of a column referenced by its name

Parameters:
  • colname (str) – column name or registered alias

  • comment (str) – column description

set_unit(colname, unit)[source]#

Set the unit of a column referenced by its name

Parameters:
  • colname (str) – column name or registered alias

  • unit (str) – unit description

property shape#

shape of the data

sort(keys, copy=False)[source]#

Sort the table in place according to one or more keys. This operates on the existing table and does not return a new table (unless copy is set).

Parameters:
  • keys (str or seq(str)) – The key(s) to order by

  • copy (bool) – if set returns a sorted copy instead of working inplace
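
For example, with a hypothetical magnitude column:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'magV': np.array([15.2, 12.1, 18.4]),
...                    'ID': np.array([3, 1, 2])})
>>> tab.sort('magV')                        # sorts tab in place
>>> ordered = tab.sort('magV', copy=True)   # or return a sorted copy instead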

stack(r, *args, **kwargs)[source]#

Superpose arrays field by field, in place.

t.stack(t1, t2, t3, default=None, inplace=True)

Parameters:

r (Table)

stats(fn=None, fields=None, fill=None)[source]#

Make statistics on columns of a table

Parameters:
  • fn (callable or sequence of callables) – functions to apply to each column default: (np.mean, np.std, np.nanmin, np.nanmax)

  • fields (str or sequence) – any key or key expression to subselect columns default is all columns

  • fill (value) – value when not applicable default np.nan

Returns:

tab – collection of statistics: one column per function in fn and one row per column of the table

Return type:

Table instance
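
A sketch of column statistics on hypothetical data, using the default statistics and then a single custom one:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'magV': np.random.normal(15, 1, 100),
...                    'magB': np.random.normal(16, 1, 100)})
>>> summary = tab.stats()                            # mean, std, nanmin, nanmax per column
>>> medians = tab.stats(fn=np.median, fields='mag')  # a single statistic on the mag* columns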

take(indices, axis=None, out=None, mode='raise')[source]#

Take elements from an array along an axis.

This function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis.

Parameters:
  • indices (array_like) – The indices of the values to extract. Also allow scalars for indices.

  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.

  • out (ndarray, optional) – If provided, the result will be placed in this array. It should be of the appropriate shape and dtype.

  • mode ({'raise', 'wrap', 'clip'}, optional) –

    Specifies how out-of-bounds indices will behave.

    • ‘raise’ – raise an error (default)

    • ‘wrap’ – wrap around

    • ‘clip’ – clip to the range

    ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.

Returns:

subarray – The returned array has the same type as a.

Return type:

ndarray

to_astropy_table(**kwargs)[source]#

Construct an astropy.table.Table (a class to represent tables of heterogeneous data) from this table.

astropy.table.Table provides a class for heterogeneous tabular data, making use of a numpy structured array internally to store the data values. A key enhancement provided by the Table class is the ability to easily modify the structure of the table by adding or removing columns, or adding new rows of data. In addition table and column metadata are fully supported.

Parameters:
  • masked (bool, optional) – Specify whether the table is masked.

  • names (list, optional) – Specify column names

  • dtype (list, optional) – Specify column data types

  • meta (dict, optional) – Metadata associated with the table.

  • copy (bool, optional) – Copy the input data (default=True).

  • rows (numpy ndarray, list of lists, optional) – Row-oriented data for table instead of data argument

  • copy_indices (bool, optional) – Copy any indices in the input data (default=True)

  • kwargs (dict, optional) – Additional keyword args when converting table-like object

Returns:

tab – converted astropy table

Return type:

astropy.table.Table

to_dask(**kwargs)[source]#

Construct a Dask DataFrame

This splits an in-memory Pandas dataframe into several parts and constructs a dask.dataframe from those parts on which Dask.dataframe can operate in parallel.

Note that, despite parallelism, Dask.dataframe may not always be faster than Pandas. We recommend that you stay with Pandas for as long as possible before switching to Dask.dataframe.

Parameters:
  • keys (sequence, optional) – ordered subset of columns to export

  • npartitions (int, optional) – The number of partitions of the index to create. Note that depending on the size and index of the dataframe, the output may have fewer partitions than requested.

  • chunksize (int, optional) – The size of the partitions of the index.

  • sort (bool) – Sort the input first to obtain cleanly divided partitions; otherwise the partitions will not be cleanly divided

  • name (string, optional) – An optional keyname for the dataframe. Defaults to hashing the input

Returns:

A dask DataFrame/Series partitioned along the index

Return type:

dask.DataFrame or dask.Series

to_dict(keys=None, contiguous=False)[source]#

Construct a dictionary from this dataframe with contiguous arrays

Parameters:
  • keys (sequence, optional) – ordered subset of columns to export

  • contiguous (boolean) – make sure each value is a contiguous numpy array object (C-aligned)

Returns:

data – converted data

Return type:

dict

to_pandas(**kwargs)[source]#

Construct a pandas dataframe

Parameters:
  • data (ndarray) – (structured dtype), list of tuples, dict, or DataFrame

  • keys (sequence, optional) – ordered subset of columns to export

  • index (string, list of fields, array-like) – Field of array to use as the index, alternately a specific set of input labels to use

  • exclude (sequence, default None) – Columns or fields to exclude

  • columns (sequence, default None) – Column names to use. If the passed data do not have names associated with them, this argument provides names for the columns. Otherwise this argument indicates the order of the columns in the result (any names not found in the data will become all-NA columns)

  • coerce_float (boolean, default False) – Attempt to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point; useful for SQL result sets

Returns:

df

Return type:

DataFrame
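
The conversion helpers share the same pattern; a sketch on a hypothetical table, assuming pandas is installed:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'ID': np.arange(4), 'flux': np.random.random(4)})
>>> df = tab.to_pandas()                  # pandas.DataFrame
>>> rec = tab.to_records()                # numpy record array
>>> data = tab.to_dict(contiguous=True)   # plain dict of contiguous numpy arrays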

to_records(**kwargs)[source]#

Construct a numpy record array from this dataframe

to_vaex(**kwargs)[source]#

Create an in memory Vaex dataset

Parameters:
  • name (str) – unique name for the dataset

  • keys (sequence, optional) – ordered subset of columns to export

Returns:

df – vaex dataset

Return type:

vaex.DataSetArrays

to_xarray(**kwargs)[source]#

Construct an xarray dataset

Each column will be converted into an independent variable in the Dataset. If the dataframe’s index is a MultiIndex, it will be expanded into a tensor product of one-dimensional indices (filling in missing values with NaN). This method will produce a Dataset very similar to that on which the ‘to_dataframe’ method was called, except with possibly redundant dimensions (since all dataset variables will have the same dimensionality).

where(condition, condvars=None, *args, **kwargs)[source]#

Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result.

Parameters:
  • condition (str) – expression to evaluate on the table; it may include mathematical operations and attribute (column) names

  • condvars (dictionary, optional) – A dictionary that replaces the local operands in the current frame.

Returns:

  • out (ndarray or tuple of ndarrays) – result equivalent to np.where()
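
A sketch of an index query on a hypothetical column:

>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> tab = SimpleTable({'magV': np.array([12.0, 14.5, 16.2])})
>>> idx = tab.where('magV < 15')   # indices of the matching rows, as with np.where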

write(fname, **kwargs)[source]#

Write the table into a file

Parameters:
  • fname (str) – filename to export the table into

Note: additional keyword arguments are forwarded to the corresponding writers (pyfits.writeto(), pyfits.append(), or np.savetxt()).

class stats[source]#

Bases: object

classmethod has_nan(v)[source]#
classmethod max(v)[source]#
classmethod mean(v)[source]#
classmethod min(v)[source]#
classmethod p16(v)[source]#
classmethod p50(v)[source]#
classmethod p84(v)[source]#
classmethod std(v)[source]#
classmethod var(v)[source]#

pyphot.sun module#

Handle the Sun Spectrum

class Sun[source]#

Bases: object

Class that handles the Sun’s spectrum and references.

Observed solar spectrum comes from: ftp://ftp.stsci.edu/cdbs/current_calspec/sun_reference_stis_001.fits

and theoretical spectrum comes from: ftp://ftp.stsci.edu/cdbs/grid/k93models/standards/sun_kurucz93.fits

The theoretical spectrum is scaled to match the observed spectrum from 1.5 to 2.5 microns, and then it is used where the observed spectrum ends. The theoretical model of the Sun is from the Kurucz '93 atlas, using the following parameters when the Sun is at 1 au:

log_Z    T_eff    log_g    V_Johnson
+0.0     5777     +4.44    -26.75

source#

filename of the sun library

Type:

str

data#

data table

Type:

SimpleTable

units#

detected units from file header

Type:

tuple

distance#

distance to the observed Sun (default, 1 au)

Type:

float

flavor#

either ‘observed’, using the STIS reference, or ‘theoretical’, for the Kurucz model.

Type:

str, (default theoretical)

__init__(source=None, distance=<Quantity(1, 'astronomical_unit')>, flavor='theoretical')[source]#

Constructor

property flux#
property wavelength#
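
A minimal usage sketch (returned values are not reproduced here); the flavor keyword switches between the Kurucz model and the observed STIS spectrum:

>>> from pyphot.sun import Sun
>>> sun = Sun(flavor='theoretical')   # or flavor='observed' for the STIS reference
>>> sun.wavelength[:5]                # wavelengths, with units
>>> sun.flux[:5]                      # flux densities, with units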

pyphot.svo module#

Link to the SVO filter profile service

http://svo2.cab.inta-csic.es/theory/fps/

If your research benefits from the use of the SVO Filter Profile Service, include the following acknowledgement in your publication:

> This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AYA2017-84089.

and please include the following references in your publication:

Example

>>> lst = "2MASS/2MASS.J 2MASS/2MASS.H 2MASS/2MASS.Ks HST/ACS_WFC.F475W HST/ACS_WFC.F814W".split()
>>> objects = [get_pyphot_filter(k) for k in lst]
get_pyphot_astropy_filter(identifier)[source]#

Query the SVO filter profile service and return the filter object

Parameters:

identifier (str) – SVO identifier of the filter profile, e.g., 2MASS/2MASS.Ks or HST/ACS_WFC.F475W. The identifier is the first column on the facility pages of the service.

Returns:

filter – Filter object

Return type:

pyphot.astropy.UnitFilter

get_pyphot_filter(identifier)[source]#

Query the SVO filter profile service and return the filter object

Parameters:

identifier (str) – SVO identifier of the filter profile, e.g., 2MASS/2MASS.Ks or HST/ACS_WFC.F475W. The identifier is the first column on the facility pages of the service.

Returns:

filter – Filter object

Return type:

pyphot.astropy.UnitFilter

pyphot.vega module#

Handle vega spec/mags/fluxes manipulations

Works with both ascii and hd5 files for back-compatibility

Vega.wavelength and Vega.flux now carry units.

class Vega[source]#

Bases: object

Class that handles the Vega spectrum and references. This class knows where to find the Vega synthetic spectrum (Bohlin 2007) in order to compute fluxes and magnitudes in given filters.

source#

filename of the vega library

Type:

str

data#

data table

Type:

SimpleTable

units#

detected units from file header

Type:

tuple

An instance can be used as a context manager as:

>>> filters = ['HST_WFC3_F275W', 'HST_WFC3_F336W', 'HST_WFC3_F475W',
...            'HST_WFC3_F814W', 'HST_WFC3_F110W', 'HST_WFC3_F160W']
>>> with Vega() as v:
...     vega_f, vega_mag, flamb = v.getSed(filters)
>>> print(vega_f, vega_mag, flamb)
__init__(source='/home/runner/work/pyphot/pyphot/pyphot/libs/vega.hd5')[source]#

Constructor

property flux#

flux(wavelength) values (with units when provided)

getFlux(filters)[source]#

Return Vega absolute fluxes in the given filters

getMag(filters)[source]#

Return Vega absolute magnitudes in the given filters

property wavelength#

wavelength (with units when found)
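
Besides the context-manager pattern above, getFlux and getMag can also be called on an instance directly; a sketch, assuming filter library names are accepted as in the getSed example:

>>> from pyphot.vega import Vega
>>> v = Vega()
>>> v.getMag(['HST_WFC3_F475W', 'HST_WFC3_F814W'])    # Vega magnitudes (zero by definition in the Vega system)
>>> v.getFlux(['HST_WFC3_F475W', 'HST_WFC3_F814W'])   # corresponding absolute fluxes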

from_Vegamag_to_Flux(lamb, vega_mag)[source]#

function decorator that transforms vega magnitudes to fluxes (without vega reference)

from_Vegamag_to_Flux_SN_errors(lamb, vega_mag)[source]#

function decorator that transforms vega magnitudes to fluxes (without vega reference)

pyphot.version module#

Module contents#