pyphot package¶
Subpackages¶
Submodules¶
pyphot.config module¶
pyphot.helpers module¶
-
pyphot.helpers.
STmag_from_flux
(v)[source]¶ Convert to ST magnitude from erg/s/cm2/AA (Flambda)
\[\begin{aligned}mag &= -2.5 \log_{10}(F) - 21.10\\M_0 &= 21.10, \quad F_0 = 3.6307805477010028 \times 10^{-9}\ \mathrm{erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}\end{aligned}\]
- v: np.ndarray[float, ndim=N], or float
array of fluxes
- mag: np.ndarray[float, ndim=N], or float
array of magnitudes
-
pyphot.helpers.
STmag_to_flux
(v)[source]¶ Convert an ST magnitude to erg/s/cm2/AA (Flambda)
\[\begin{aligned}mag &= -2.5 \log_{10}(F) - 21.10\\M_0 &= 21.10, \quad F_0 = 3.6307805477010028 \times 10^{-9}\ \mathrm{erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}\end{aligned}\]
- v: np.ndarray[float, ndim=N] or float
array of magnitudes
- flux: np.ndarray[float, ndim=N], or float
array of fluxes
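A minimal usage sketch of these two helpers (the values are illustrative; the functions accept scalars or numpy arrays):
>>> import numpy as np
>>> from pyphot import helpers
>>> flux = np.array([3.631e-9, 1e-10])          # erg/s/cm2/AA
>>> mags = helpers.STmag_from_flux(flux)        # first value is ~0 by construction
>>> flux_back = helpers.STmag_to_flux(mags)     # round-trips back to the input fluxes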
-
pyphot.helpers.
extractPhotometry
(lamb, spec, flist, absFlux=True, progress=True)[source]¶ Extract SEDs from a single spectrum
- lamb: ndarray[float,ndim=1]
wavelength of spec
- spec: ndarray[float, ndim=1]
spectrum
- flist: list[filter]
list of filter objects
- absflux: bool
return SEDs in absolute fluxes if set
- progress: bool
show progression if set
- cls: ndarray[float, ndim=1]
filters central wavelength
- seds: ndarray[float, ndim=1]
integrated sed
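A hedged sketch of extracting photometry from one spectrum through a library passband; the filter name and the use of plain (unit-less) arrays in AA and erg/s/cm2/AA are assumptions:
>>> import numpy as np
>>> from pyphot import helpers
>>> from pyphot.phot import get_library
>>> lib = get_library()
>>> flist = lib.load_filters(['GROUND_JOHNSON_V'])    # filter name is an assumption
>>> lamb = np.linspace(3000, 10000, 2000)             # AA
>>> spec = np.full(lamb.shape, 1e-15)                 # erg/s/cm2/AA
>>> cls, seds = helpers.extractPhotometry(lamb, spec, flist, absFlux=False, progress=False)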
-
pyphot.helpers.
extractSEDs
(lamb, specs, flist, absFlux=True, progress=True)[source]¶ Extract SEDs from a grid
- g0: ModelGrid instance
initial spectral grid
- flist: sequence(filter)
list of filter object instances
- absflux: bool
return SEDs in absolute fluxes if set
- progress: bool
show progression if set
- cls: ndarray[float, ndim=1]
filters central wavelength
- seds: ndarray[float, ndim=1]
integrated sed
- grid: Table
SED grid properties table from g0 (g0.grid)
-
pyphot.helpers.
fluxErrTomag
(flux, fluxerr)[source]¶ Return the magnitudes and associated errors from fluxes and flux error values
- flux: np.ndarray[float, ndim=1]
array of fluxes
- fluxerr: np.ndarray[float, ndim=1]
array of flux errors
- mag: np.ndarray[float, ndim=1]
array of magnitudes
- err: np.ndarray[float, ndim=1]
array of magnitude errors
-
pyphot.helpers.
fluxToMag
(flux)[source]¶ Return the magnitudes from flux values
- flux: np.ndarray[float, ndim=N]
array of fluxes
- mag: np.ndarray[float, ndim=N]
array of magnitudes
-
pyphot.helpers.
magErrToFlux
(mag, err)[source]¶ Return the flux and associated errors from magnitude and mag error values
- mag: np.ndarray[float, ndim=1]
array of magnitudes
- err: np.ndarray[float, ndim=1]
array of magnitude errors
- flux: np.ndarray[float, ndim=1]
array of fluxes
- fluxerr: np.ndarray[float, ndim=1]
array of flux errors
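A short sketch of the round trip between fluxes and magnitudes with errors (values are illustrative):
>>> import numpy as np
>>> from pyphot import helpers
>>> flux = np.array([1e-14, 5e-15])
>>> fluxerr = 0.05 * flux
>>> mag, magerr = helpers.fluxErrTomag(flux, fluxerr)
>>> flux2, fluxerr2 = helpers.magErrToFlux(mag, magerr)   # inverse conversion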
pyphot.licks module¶
Lick indices calculations
This package provides functions to compute spectral indices
A collection of many common indices is available in licks.dat
The Lick system of spectral line indices is one of the most commonly used methods of determining ages and metallicities of unresolved (integrated light) stellar populations.
The calibration of the Lick/ IDS system is complicated because the original Lick spectra were not flux calibrated, so there are usually systematic effects due to differences in continuum shape. Proper calibration involves observing many of the original Lick/IDS standard stars and deriving offsets to the standard system.
Todo
fix units: all computations must be done internally in AA; fluxes are not checked for per-AA units
-
class
pyphot.licks.
LickIndex
(name, lick, unit='AA')[source]¶ Bases:
object
Define a Lick Index similarly to a Filter object
-
property
band
¶ Unitwise band definition
-
property
blue
¶ Unitwise band definition
-
classmethod
continuum_normalized_region_around_line
(wi, fi, blue, red, band=None, degree=1)[source]¶ cut out and normalize flux around a line
- wi: ndarray (nw, )
array of wavelengths in AA
- fi: ndarray (N, nw)
array of flux values for different spectra in the series
- blue: tuple(2)
selection for blue continuum estimate
- red: tuple(2)
selection for red continuum estimate
- band: tuple(2), optional
select region in this band only. default is band = (min(blue), max(red))
- degree: int
degree of the polynomial fit to the continuum
- wnew: ndarray (nw1, )
wavelength of the selection in AA
- f: ndarray (N, len(wnew))
normalized flux in the selection region
-
get
(wave, flux, **kwargs)[source]¶ compute spectral index after continuum subtraction
- w: ndarray (nw, )
array of wavelengths in AA
- flux: ndarray (N, nw)
array of flux values for different spectra in the series
- degree: int (default 1)
degree of the polynomial fit to the continuum
- nocheck: bool
set to silently pass on spectral domain mismatch. otherwise raises an error when index is not covered
- ew: ndarray (N,)
equivalent width or magnitude array
ValueError: when the spectral coverage wave does not cover the index range
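A sketch of computing one index on a synthetic flat spectrum; the lookup by name on LickLibrary and the index name 'CN_1' are assumptions:
>>> import numpy as np
>>> from pyphot.licks import LickLibrary
>>> lib = LickLibrary()
>>> cn1 = lib['CN_1']                         # name lookup and index name are assumptions
>>> wave = np.linspace(4000, 4400, 2000)      # AA
>>> flux = np.ones((1, wave.size))            # one flat spectrum
>>> ew = cn1.get(wave, flux, degree=1)        # equivalent width (or magnitude, per index_unit)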
-
property
index_unit
¶
-
property
red
¶ Unitwise band definition
-
class
pyphot.licks.
LickLibrary
(fname='/github/workspace/pyphot/libs/licks.dat', comment='#')[source]¶ Bases:
object
Collection of Lick indices
-
property
content
¶
-
property
description
¶ any comment in the input file
-
pyphot.licks.
reduce_resolution
(wi, fi, fwhm0=0.55, sigma_floor=0.2)[source]¶ Adapt the resolution of the spectra to match the lick definitions
Lick definitions have different resolution elements as a function of wavelength. These definitions are hard-coded in this function
- wi: ndarray (n, )
wavelength definition
- fi: ndarray (nspec, n) or (n, )
spectra to convert
- fwhm0: float
initial broadening in the spectra fi
- sigma_floor: float
minimal dispersion to consider
- flux_red: ndarray (nspec, n) or (n, )
reduced spectra
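A minimal sketch (wavelengths in AA; fwhm0 and sigma_floor keep their documented defaults):
>>> import numpy as np
>>> from pyphot.licks import reduce_resolution
>>> wave = np.linspace(4000, 6000, 4000)      # AA
>>> flux = np.ones(wave.size)
>>> flux_red = reduce_resolution(wave, flux, fwhm0=0.55, sigma_floor=0.2)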
pyphot.pbar module¶
Simple progressbar¶
This package implements a progress bar class that can be used to decorate an iterator, a function, or even standalone.
The format of the meter is flexible: along with the progress bar itself it can display the running time, an ETA, and the iteration rate.
- An example is:
description [----------] k/n 10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]
-
class
pyphot.pbar.
Pbar
(maxval=None, desc=None, time=True, eta=True, rate=True, length=None, file=None, keep=True, mininterval=0.5, miniters=1, units='iters', **kwargs)[source]¶ Bases:
object
make a progress string in a shape of:
[----------] k/n 10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]
- time: bool, optional (default: True)
if set, add the runtime information
- eta: bool, optional (default: True)
if set, add an estimated time to completion
- rate: bool, optional (default: True)
if set, add the rate information
- length: int, optional (default: None)
number of characters showing the progress meter itself; if None, the meter will adapt to the buffer width
TODO: make it variable with the buffer length
- keep: bool, optional (default: True)
If not set, deletes its traces from screen after completion
- file: buffer
the buffer to write into
- mininterval: float (default: 0.5)
minimum time in seconds between two updates of the meter
- miniters: int, optional (default: 1)
minimum iteration number between two updates of the meter
- units: str, optional (default: ‘iters’)
unit of the iteration
-
build_str_meter
(n, total, elapsed)[source]¶ make a progress string in a shape of:
[----------] k/n 10% [time: 00:00:00, eta: 00:00:00, 2.7 iters/sec]
- n: int
number of finished iterations
- total: int
total number of iterations, or None
- elapsed: int
number of seconds passed since start
- txt: str
string representing the meter
-
static
format_interval
(t)[source]¶ make a human readable time interval decomposed into days, hours, minutes and seconds
- t: int
interval in seconds
- txt: str
string representing the interval (format: <days>d <hrs>:<min>:<sec>)
-
iterover
(iterable, total=None)[source]¶ Get an iterable object, and return an iterator which acts exactly like the iterable, but prints a progress meter and updates it every time a value is requested.
- iterable: generator or iterable object
object to iter over.
- total: int, optional
the number of iterations is assumed to be the length of the iterator. But sometimes the iterable has no associated length or its length is not the actual number of future iterations. In this case, total can be set to define the number of iterations.
- gen: generator
pass the values from the initial iterator
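A minimal sketch of decorating an iterator with the meter (the description string is illustrative):
>>> from pyphot.pbar import Pbar
>>> for item in Pbar(desc='processing').iterover(range(100)):
...     pass  # do the actual work for each item here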
pyphot.phot module (deprecated)¶
Use either the sandbox or astropy submodules instead.
Photometric package¶
Defines a Filter class and associated functions to extract photometry.
This also includes functions to keep libraries up to date
Note
integrations are done using trapezoid()
Why not Simpson's rule? Simpson's rule takes sequences of 3 points to
make a quadratic interpolation. When filters have sharp edges, the errors
due to this “interpolation” are extremely large in comparison to the
uncertainties induced by trapezoidal integration.
-
class
pyphot.phot.
Ascii_Library
(source)[source]¶ Bases:
pyphot.phot.Library
Interface to one or multiple directories or files as a filter library
>>> lib = Ascii_Library(['ground', 'hst', 'myfilter.csv'])
-
add_filters
(filter_object, fmt='%.6f', **kwargs)[source]¶ Add a filter to the library permanently
- filter_object: Filter object
filter to add
-
load_filters
(names, interp=True, lamb=None, filterLib=None)[source]¶ load a limited set of filters
- names: list[str]
normalized names according to filtersLib
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filterLib: path
path to the filter library hd5 file
- filters: list[filter]
list of filter objects
-
-
class
pyphot.phot.
Constants
[source]¶ Bases:
object
A namespace for constants
-
c
= <Quantity(2.99792458e+18, 'angstrom / second')>¶
-
h
= <Quantity(6.62607554e-27, 'erg * second')>¶
-
-
class
pyphot.phot.
Filter
(wavelength, transmit, name='', dtype='photon', unit=None)[source]¶ Bases:
object
Class filter
Define a filter by its name, wavelength and transmission curve. The type of detector (energy or photon counter) can be specified to adapt the calculations (default: photon).
- name: str
name of the filter
- cl: float
central wavelength of the filter
- norm: float
normalization factor of the filter
- lpivot: float
pivot wavelength of the filter
- wavelength: ndarray
wavelength sequence defining the filter transmission curve
- transmit: ndarray
transmission curve of the filter
- dtype: str
detector type, either “photon” or “energy” counter
- unit: str
wavelength units
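A sketch of building a synthetic top-hat passband with this (deprecated) flavour and pushing a flat spectrum through it; all numbers are illustrative and the 'Angstrom' unit string is an assumption (the unit-aware sandbox flavour expects quantities instead of plain arrays):
>>> import numpy as np
>>> from pyphot.phot import Filter
>>> wave = np.linspace(4000, 6000, 200)                           # AA
>>> transmit = np.where((wave > 4500) & (wave < 5500), 1.0, 0.0)  # top-hat transmission
>>> pb = Filter(wave, transmit, name='tophat', dtype='photon', unit='Angstrom')
>>> slamb = np.linspace(3000, 7000, 1000)                         # AA
>>> sflux = np.full(slamb.shape, 1e-15)                           # erg/s/cm2/AA
>>> flux = pb.get_flux(slamb, sflux)
>>> mag = -2.5 * np.log10(flux) - pb.AB_zero_mag                  # AB magnitude in this band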
-
property
AB_zero_Jy
¶ AB flux zero point in Jansky (Jy)
-
property
AB_zero_flux
¶ AB flux zero point in erg/s/cm2/AA
-
property
AB_zero_mag
¶ AB magnitude zero point ABmag = -2.5 * log10(f_nu) - 48.60
= -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60 = -2.5 * log10(f_lamb) - zpts
-
property
ST_zero_Jy
¶ ST flux zero point in Jansky (Jy)
-
property
ST_zero_flux
¶ ST flux zero point in erg/s/cm2/AA
-
property
ST_zero_mag
¶ ST magnitude zero point STmag = -2.5 * log10(f_lamb) -21.1
-
property
Vega_zero_Jy
¶ Vega flux zero point in Jansky (Jy)
-
property
Vega_zero_flux
¶ Vega flux zero point in erg/s/cm2/AA
-
property
Vega_zero_mag
¶ vega magnitude zero point vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega) vegamag = -2.5 * log10(f_lamb) - zpts
-
property
Vega_zero_photons
¶ Vega number of photons per wavelength unit
Note
see self.get_Nphotons
-
apply_transmission
(slamb, sflux)[source]¶ Apply filter transmission to a spectrum (with reinterpolation of the filter)
- slamb: ndarray
spectrum wavelength definition domain
- sflux: ndarray
associated flux
- flux: float
new spectrum values accounting for the filter
-
property
cl
¶ Unitwise wavelength definition
-
property
fwhm
¶ the difference between the two wavelengths for which filter transmission is half maximum
Note
This calculation is not exact but rounded to the nearest passband data points
-
getFlux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy. If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
get_Nphotons
(slamb, sflux, axis=-1)[source]¶ Compute the number of photons through the filter (Ntot / width in the documentation)
getflux() * leff / hc
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux in erg/s/cm2/AA
- N: float
Number of photons of the spectrum within the filter
-
get_flux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy (same as getFlux). If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
property
leff
¶ Unitwise Effective wavelength leff = int (lamb * T * Vega dlamb) / int(T * Vega dlamb)
-
property
lmax
¶ Calculated as the last value with a transmission at least 1% of maximum transmission
-
property
lmin
¶ Calculated as the first value with a transmission at least 1% of maximum transmission
-
property
lphot
¶ Photon distribution based effective wavelength. Defined as
lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)
which we calculate as
lphot = get_flux(lamb * vega) / get_flux(vega)
-
property
lpivot
¶ Unitwise wavelength definition
-
classmethod
make_integration_filter
(lmin, lmax, name='', dtype='photon', unit=None)[source]¶ Generate a Heaviside (step) filter between lmin and lmax
-
to_Table
(**kwargs)[source]¶ Export filter to a SimpleTable object
- fname: str
filename
Uses SimpleTable parameters
-
property
wavelength
¶ Unitwise wavelength definition
-
property
width
¶ Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as that covered by the filter transmission curve.
W = int(T dlamb) / max(T)
-
class
pyphot.phot.
HDF_Library
(source='/github/workspace/pyphot/libs/new_filters.hd5', mode='r')[source]¶ Bases:
pyphot.phot.Library
Storage based on HDF
-
add_filter
(f, **kwargs)[source]¶ Add a filter to the library permanently
- f: Filter object
filter to add
-
load_all_filters
(interp=True, lamb=None)[source]¶ load all filters from the library
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filters: list[filter]
list of filter objects
-
load_filters
(names, interp=True, lamb=None, filterLib=None)[source]¶ load a limited set of filters
- names: list[str]
normalized names according to filtersLib
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filterLib: path
path to the filter library hd5 file
- filters: list[filter]
list of filter objects
-
-
class
pyphot.phot.
Library
(source='/github/workspace/pyphot/libs/new_filters.hd5', *args, **kwargs)[source]¶ Bases:
object
Common grounds for filter libraries
-
property
content
¶ Get the content list
-
class
pyphot.phot.
UncertainFilter
(wavelength, mean_transmit, samples, name='', dtype='photon', unit=None)[source]¶ Bases:
pyphot.phot.Filter
What could be a filter with uncertainties
- wavelength: ndarray
wavelength sequence defining the filter transmission curve
- mean_: Filter
mean passband transmission
- samples_: sequence(Filter)
samples from the uncertain passband transmission model
- name: string
name of the passband
- dtype: str
detector type, either “photon” or “energy” counter
- unit: str
wavelength units
-
property
AB_zero_Jy
¶ AB flux zero point in Jansky (Jy)
-
property
AB_zero_flux
¶ AB flux zero point in erg/s/cm2/AA
-
property
AB_zero_mag
¶ AB magnitude zero point ABmag = -2.5 * log10(f_nu) - 48.60
= -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60 = -2.5 * log10(f_lamb) - zpts
-
property
ST_zero_Jy
¶ ST flux zero point in Jansky (Jy)
-
property
ST_zero_flux
¶ ST flux zero point in erg/s/cm2/AA
-
property
ST_zero_mag
¶ ST magnitude zero point STmag = -2.5 * log10(f_lamb) -21.1
-
property
Vega_zero_Jy
¶ Vega flux zero point in Jansky (Jy)
-
property
Vega_zero_flux
¶ Vega flux zero point in erg/s/cm2/AA
-
property
Vega_zero_mag
¶ Vega magnitude zero point Vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega) Vegamag = -2.5 * log10(f_lamb) - zpts
-
property
Vega_zero_photons
¶ Vega number of photons per wavelength unit
Note
see self.get_Nphotons
-
apply_transmission
(slamb, sflux)[source]¶ Apply filter transmission to a spectrum (with reinterpolation of the filter)
- slamb: ndarray
spectrum wavelength definition domain
- sflux: ndarray
associated flux
- flux: float
new spectrum values accounting for the filter
-
property
cl
¶ Unitwise wavelength definition
-
classmethod
from_gp_model
(model, xprime=None, n_samples=10, **kwargs)[source]¶ Generate a filter object from a sklearn GP model
- model: sklearn.gaussian_process.GaussianProcessRegressor
model of the passband
- xprime: ndarray
wavelength to express the model in addition to the training points
- n_samples: int
number of samples to generate from the model.
- **kwargs: dict
UncertainFilter keywords
-
property
fwhm
¶ the difference between the two wavelengths for which filter transmission is half maximum
Note
This calculation is not exact but rounded to the nearest passband data points
-
getFlux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy. If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
get_Nphotons
(slamb, sflux, axis=-1)[source]¶ Compute the number of photons through the filter (Ntot / width in the documentation)
getflux() * leff / hc
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux in erg/s/cm2/AA
- N: float
Number of photons of the spectrum within the filter
-
property
leff
¶ Unitwise Effective wavelength leff = int (lamb * T * Vega dlamb) / int(T * Vega dlamb)
-
property
lmax
¶ Calculated as the last value with a transmission at least 1% of maximum transmission
-
property
lmin
¶ Calculated as the first value with a transmission at least 1% of maximum transmission
-
property
lphot
¶ Photon distribution based effective wavelength. Defined as
lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)
which we calculate as
lphot = get_flux(lamb * vega) / get_flux(vega)
-
property
lpivot
¶ Unitwise wavelength definition
-
to_Table
(**kwargs)[source]¶ Export filter to a SimpleTable object
- fname: str
filename
Uses SimpleTable parameters
-
property
transmit
¶ Transmission curves
-
property
wavelength
¶ Unitwise wavelength definition
-
property
wavelength_unit
¶ Unit wavelength definition
-
property
width
¶ Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as that covered by the filter transmission curve.
W = int(T dlamb) / max(T)
-
pyphot.phot.
get_library
(fname='/github/workspace/pyphot/libs/new_filters.hd5', **kwargs)[source]¶ Finds the appropriate class to load the library
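A short sketch of opening the default library shipped with pyphot and pulling a single passband out of it (the filter name is the one used in the Vega example at the end of this page):
>>> from pyphot.phot import get_library
>>> lib = get_library()                      # picks the right Library class for the default file
>>> print(len(lib.content))                  # names of all available passbands
>>> f, = lib.load_filters(['HST_WFC3_F275W'])
>>> print(f.name, f.lpivot)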
pyphot.sandbox module¶
Sandbox of new developments
Use at your own risk
Photometric package using Astropy Units¶
Defines a Filter class and associated functions to extract photometry.
This also includes functions to keep libraries up to date
Note
integrations are done using trapezoid()
Why not Simpson's rule? Simpson's rule takes sequences of 3 points to
make a quadratic interpolation. When filters have sharp edges, the errors
due to this “interpolation” are extremely large in comparison to the
uncertainties induced by trapezoidal integration.
-
class
pyphot.sandbox.
Constants
[source]¶ Bases:
object
A namespace for constants
-
c
= <Quantity(2.99792458e+18, 'angstrom / second')>¶
-
h
= <Quantity(6.62607554e-27, 'erg * second')>¶
-
-
class
pyphot.sandbox.
UncertainFilter
(wavelength, mean_transmit, samples, name='', dtype='photon', unit=None)[source]¶ Bases:
pyphot.sandbox.UnitFilter
What could be a filter with uncertainties
- wavelength: ndarray
wavelength sequence defining the filter transmission curve
- mean_: Filter
mean passband transmission
- samples_: sequence(Filter)
samples from the uncertain passband transmission model
- name: string
name of the passband
- dtype: str
detector type, either “photon” or “energy” counter
- unit: str
wavelength units
-
property
AB_zero_Jy
¶ AB flux zero point in Jansky (Jy)
-
property
AB_zero_flux
¶ AB flux zero point in erg/s/cm2/AA
-
property
AB_zero_mag
¶ AB magnitude zero point ABmag = -2.5 * log10(f_nu) - 48.60
= -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60 = -2.5 * log10(f_lamb) - zpts
-
property
ST_zero_Jy
¶ ST flux zero point in Jansky (Jy)
-
property
ST_zero_flux
¶ ST flux zero point in erg/s/cm2/AA
-
property
ST_zero_mag
¶ ST magnitude zero point STmag = -2.5 * log10(f_lamb) -21.1
-
property
Vega_zero_Jy
¶ Vega flux zero point in Jansky (Jy)
-
property
Vega_zero_flux
¶ Vega flux zero point in erg/s/cm2/AA
-
property
Vega_zero_mag
¶ Vega magnitude zero point Vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega) Vegamag = -2.5 * log10(f_lamb) - zpts
-
property
Vega_zero_photons
¶ Vega number of photons per wavelength unit
Note
see self.get_Nphotons
-
apply_transmission
(slamb, sflux)[source]¶ Apply filter transmission to a spectrum (with reinterpolation of the filter)
- slamb: ndarray
spectrum wavelength definition domain
- sflux: ndarray
associated flux
- flux: float
new spectrum values accounting for the filter
-
property
cl
¶ Unitwise wavelength definition
-
classmethod
from_gp_model
(model, xprime=None, n_samples=10, **kwargs)[source]¶ Generate a filter object from a sklearn GP model
- model: sklearn.gaussian_process.GaussianProcessRegressor
model of the passband
- xprime: ndarray
wavelength to express the model in addition to the training points
- n_samples: int
number of samples to generate from the model.
- **kwargs: dict
UncertainFilter keywords
-
property
fwhm
¶ the difference between the two wavelengths for which filter transmission is half maximum
Note
This calculation is not exact but rounded to the nearest passband data points
-
getFlux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy. If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
get_Nphotons
(slamb, sflux, axis=-1)[source]¶ Compute the number of photons through the filter (Ntot / width in the documentation)
getflux() * leff / hc
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux in erg/s/cm2/AA
- N: float
Number of photons of the spectrum within the filter
-
property
leff
¶ Unitwise Effective wavelength leff = int (lamb * T * Vega dlamb) / int(T * Vega dlamb)
-
property
lmax
¶ Calculated as the last value with a transmission at least 1% of maximum transmission
-
property
lmin
¶ Calculated as the first value with a transmission at least 1% of maximum transmission
-
property
lphot
¶ Photon distribution based effective wavelength. Defined as
lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)
which we calculate as
lphot = get_flux(lamb * vega) / get_flux(vega)
-
property
lpivot
¶ Unitwise wavelength definition
-
to_Table
(**kwargs)[source]¶ Export filter to a SimpleTable object
- fname: str
filename
Uses SimpleTable parameters
-
property
transmit
¶ Transmission curves
-
property
wavelength
¶ Unitwise wavelength definition
-
property
wavelength_unit
¶ Unit wavelength definition
-
property
width
¶ Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as that covered by the filter transmission curve.
W = int(T dlamb) / max(T)
-
class
pyphot.sandbox.
UnitAscii_Library
(source)[source]¶ Bases:
pyphot.sandbox.UnitLibrary
Interface to one or multiple directories or files as a filter library
>>> lib = Ascii_Library(['ground', 'hst', 'myfilter.csv'])
-
add_filters
(filter_object, fmt='%.6f', **kwargs)[source]¶ Add a filter to the library permanently
- filter_object: Filter object
filter to add
-
load_filters
(names, interp=True, lamb=None, filterLib=None)[source]¶ load a limited set of filters
- names: list[str]
normalized names according to filtersLib
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filterLib: path
path to the filter library hd5 file
- filters: list[filter]
list of filter objects
-
-
class
pyphot.sandbox.
UnitFilter
(wavelength, transmit, name='', dtype='photon', unit=None)[source]¶ Bases:
object
Evolution of Filter that makes sure the input spectra and output fluxes have units to avoid misinterpretation.
- Note the usual (non-SI) units of flux definitions:
flam = erg/s/cm**2/AA
fnu = erg/s/cm**2/Hz
photflam = photon/s/cm**2/AA
photnu = photon/s/cm**2/Hz
Define a filter by its name, wavelength and transmission curve. The type of detector (energy or photon counter) can be specified to adapt the calculations (default: photon).
- name: str
name of the filter
- cl: float
central wavelength of the filter
- norm: float
normalization factor of the filter
- lpivot: float
pivot wavelength of the filter
- wavelength: ndarray
wavelength sequence defining the filter transmission curve
- transmit: ndarray
transmission curve of the filter
- dtype: str
detector type, either “photon” or “energy” counter
- unit: str
wavelength units
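A sketch of the unit-aware flavour, assuming the pint-style registry is exported as pyphot.unit (as used in the package examples) and that 'AA' and 'erg/s/cm**2/AA' are registered unit strings:
>>> import numpy as np
>>> from pyphot import unit                                  # unit registry (assumed export)
>>> from pyphot.sandbox import UnitFilter
>>> wave = np.linspace(4000, 6000, 200) * unit['AA']
>>> transmit = np.where((wave.magnitude > 4500) & (wave.magnitude < 5500), 1.0, 0.0)
>>> pb = UnitFilter(wave, transmit, name='tophat', dtype='photon')
>>> slamb = np.linspace(3000, 7000, 1000) * unit['AA']
>>> sflux = np.full(slamb.shape, 1e-15) * unit['erg/s/cm**2/AA']
>>> flux = pb.get_flux(slamb, sflux)                         # result carries flux units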
-
property
AB_zero_Jy
¶ AB flux zero point in Jansky (Jy)
-
property
AB_zero_flux
¶ AB flux zero point in erg/s/cm2/AA
-
property
AB_zero_mag
¶ AB magnitude zero point ABmag = -2.5 * log10(f_nu) - 48.60
= -2.5 * log10(f_lamb) - 2.5 * log10(lpivot ** 2 / c) - 48.60 = -2.5 * log10(f_lamb) - zpts
-
property
ST_zero_Jy
¶ ST flux zero point in Jansky (Jy)
-
property
ST_zero_flux
¶ ST flux zero point in erg/s/cm2/AA
-
property
ST_zero_mag
¶ ST magnitude zero point STmag = -2.5 * log10(f_lamb) -21.1
-
property
Vega_zero_Jy
¶ Vega flux zero point in Jansky (Jy)
-
property
Vega_zero_flux
¶ Vega flux zero point in erg/s/cm2/AA
-
property
Vega_zero_mag
¶ vega magnitude zero point vegamag = -2.5 * log10(f_lamb) + 2.5 * log10(f_vega) vegamag = -2.5 * log10(f_lamb) - zpts
-
property
Vega_zero_photons
¶ Vega number of photons per wavelength unit
Note
see self.get_Nphotons
-
apply_transmission
(slamb, sflux)[source]¶ Apply filter transmission to a spectrum (with reinterpolation of the filter)
- slamb: ndarray
spectrum wavelength definition domain
- sflux: ndarray
associated flux
- flux: float
new spectrum values accounting for the filter
-
property
cl
¶ Unitwise wavelength definition
-
property
fwhm
¶ the difference between the two wavelengths for which filter transmission is half maximum
Note
This calculation is not exact but rounded to the nearest passband data points
-
getFlux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy. If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
get_Nphotons
(slamb, sflux, axis=-1)[source]¶ Compute the number of photons through the filter (Ntot / width in the documentation)
getflux() * leff / hc
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux in erg/s/cm2/AA
- N: float
Number of photons of the spectrum within the filter
-
get_flux
(slamb, sflux, axis=-1)[source]¶ Integrate the flux within the filter and return the integrated energy (same as getFlux). If you consider applying the filter to many spectra, you might want to consider extractSEDs.
- slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
- sflux: ndarray(dtype=float, ndim=1)
associated flux
- flux: float
Energy of the spectrum within the filter
-
property
leff
¶ Unitwise Effective wavelength leff = int (lamb * T * Vega dlamb) / int(T * Vega dlamb)
-
property
lmax
¶ Calculated as the last value with a transmission at least 1% of maximum transmission
-
property
lmin
¶ Calculated as the first value with a transmission at least 1% of maximum transmission
-
property
lphot
¶ Photon distribution based effective wavelength. Defined as
lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)
which we calculate as
lphot = get_flux(lamb * vega) / get_flux(vega)
-
property
lpivot
¶ Unitwise wavelength definition
-
classmethod
make_integration_filter
(lmin, lmax, name='', dtype='photon', unit=None)[source]¶ Generate a Heaviside (step) filter between lmin and lmax
-
to_Table
(**kwargs)[source]¶ Export filter to a SimpleTable object
- fname: str
filename
Uses SimpleTable parameters
-
property
wavelength
¶ Unitwise wavelength definition
-
property
width
¶ Effective width. Equivalent to the horizontal size of a rectangle with height equal to the maximum transmission and with the same area as that covered by the filter transmission curve.
W = int(T dlamb) / max(T)
-
class
pyphot.sandbox.
UnitHDF_Library
(source='/github/workspace/pyphot/libs/new_filters.hd5', mode='r')[source]¶ Bases:
pyphot.sandbox.UnitLibrary
Storage based on HDF
-
add_filter
(f, **kwargs)[source]¶ Add a filter to the library permanently
- f: Filter object
filter to add
-
load_all_filters
(interp=True, lamb=None)[source]¶ load all filters from the library
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filters: list[filter]
list of filter objects
-
load_filters
(names, interp=True, lamb=None, filterLib=None)[source]¶ load a limited set of filters
- names: list[str]
normalized names according to filtersLib
- interp: bool
reinterpolate the filters over given lambda points
- lamb: ndarray[float, ndim=1]
desired wavelength definition of the filter
- filterLib: path
path to the filter library hd5 file
- filters: list[filter]
list of filter objects
-
-
class
pyphot.sandbox.
UnitLibrary
(source='/github/workspace/pyphot/libs/new_filters.hd5', *args, **kwargs)[source]¶ Bases:
object
Common grounds for filter libraries
-
property
content
¶ Get the content list
-
class
pyphot.sandbox.
UnitLickIndex
(name, lick, unit='AA')[source]¶ Bases:
pyphot.licks.LickIndex
Define a Lick Index similarly to a Filter object
-
get
(wave, flux, **kwargs)[source]¶ compute spectral index after continuum subtraction
- w: ndarray (nw, )
array of wavelengths in AA
- flux: ndarray (N, nw)
array of flux values for different spectra in the series
- degree: int (default 1)
degree of the polynomial fit to the continuum
- nocheck: bool
set to silently pass on spectral domain mismatch. otherwise raises an error when index is not covered
- ew: ndarray (N,)
equivalent width or magnitude array
ValueError: when the spectral coverage wave does not cover the index range
-
-
class
pyphot.sandbox.
UnitLickLibrary
(fname='/github/workspace/pyphot/libs/licks.dat', comment='#')[source]¶ Bases:
pyphot.licks.LickLibrary
Collection of Lick indices
-
property
content
¶
-
property
description
¶ any comment in the input file
-
pyphot.sandbox.
get_library
(fname='/github/workspace/pyphot/libs/new_filters.hd5', **kwargs)[source]¶ Finds the appropriate class to load the library
-
pyphot.sandbox.
reduce_resolution
(wi, fi, fwhm0=<Quantity(0.55, 'angstrom')>, sigma_floor=<Quantity(0.2, 'angstrom')>)[source]¶ Adapt the resolution of the spectra to match the lick definitions
Lick definitions have different resolution elements as a function of wavelength. These definitions are hard-coded in this function
- wi: ndarray (n, )
wavelength definition
- fi: ndarray (nspec, n) or (n, )
spectra to convert
- fwhm0: float
initial broadening in the spectra fi
- sigma_floor: float
minimal dispersion to consider
- flux_red: ndarray (nspec, n) or (n, )
reduced spectra
pyphot.simpletable module¶
- This file implements a Table class that is designed to be the basis of any format
Requirements¶
- FITS format:
- astropy:
provides a replacement to pyfits; pyfits can still be used instead but astropy is now the default
- HDF5 format:
pytables
RuntimeError will be raised when writing to a format associated with a missing package.
-
class
pyphot.simpletable.
AstroHelpers
[source]¶ Bases:
object
Helpers related to astronomy data
-
static
conesearch
(ra0, dec0, ra, dec, r, outtype=0)[source]¶ Perform a cone search on a table
- ra0: ndarray[ndim=1, dtype=float]
column name to use as RA source in degrees
- dec0: ndarray[ndim=1, dtype=float]
column name to use as DEC source in degrees
- ra: float
ra to look for (in degrees)
- dec: float
dec to look for (in degrees)
- r: float
distance in degrees
- outtype: int
- type of outputs
0 – minimal, indices of matching coordinates
1 – indices and distances of matching coordinates
2 – full, boolean filter and distances
- t: tuple
- if outtype is 0:
only return indices from ra0, dec0
- elif outtype is 1:
return indices from ra0, dec0 and distances
- elif outtype is 2:
return conditional vector and distance to all ra0, dec0
-
static
deg2dms
(val, delim=':')[source]¶ Convert degrees into sexagesimal coordinates (DD:MM:SS)
- deg: float
angle in degrees
- delimiter: str
character delimiting the fields
- str: string or sequence
converted sexagesimal string
-
static
deg2hms
(val, delim=':')[source]¶ Convert degrees into sexagesimal coordinates (HH:MM:SS)
- deg: float
angle in degrees
- delimiter: str
character delimiting the fields
- str: string or sequence
converted sexagesimal string
-
static
dms2deg
(_str, delim=':')[source]¶ Convert sexagesimal coordinates (DD:MM:SS) into degrees
- str: string or sequence
string to convert
- delimiter: str
character delimiting the fields
- deg: float
angle in degrees
-
static
euler
(ai_in, bi_in, select, b1950=False, dtype='f8')[source]¶ Transform between Galactic, celestial, and ecliptic coordinates. Celestial coordinates (RA, Dec) should be given in equinox J2000 unless the b1950 is True.
select   From            To
1        RA-Dec (2000)   Galactic
2        Galactic        RA-Dec
3        RA-Dec          Ecliptic
4        Ecliptic        RA-Dec
5        Ecliptic        Galactic
6        Galactic        Ecliptic
- long_in: float, or sequence
Input Longitude in DEGREES, scalar or vector.
- lat_in: float, or sequence
Latitude in DEGREES
- select: int
Integer from 1 to 6 specifying type of coordinate transformation.
- b1950: bool
if set, use the B1950 (FK4) equinox instead of J2000
- long_out: float, seq
Output Longitude in DEGREES
- lat_out: float, seq
Output Latitude in DEGREES
Note
Written W. Landsman, February 1987 Adapted from Fortran by Daryl Yentis NRL Converted to IDL V5.0 W. Landsman September 1997 Made J2000 the default, added /FK4 keyword W. Landsman December 1998 Add option to specify SELECT as a keyword W. Landsman March 2003 Converted from IDL to numerical Python: Erin Sheldon, NYU, 2008-07-02
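A small sketch of the transform, using select=1 (RA-Dec J2000 to Galactic) from the table above; the input coordinates (M31) are illustrative:
>>> from pyphot.simpletable import AstroHelpers
>>> glon, glat = AstroHelpers.euler(10.68458, 41.26917, select=1)   # J2000 -> Galactic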
-
static
hms2deg
(_str, delim=':')[source]¶ Convert sexagesimal coordinates (HH:MM:SS) into degrees
- str: string or sequence
string to convert
- delimiter: str
character delimiting the fields
- deg: float
angle in degrees
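A sketch of the sexagesimal helpers; the exact output string format depends on the implementation:
>>> from pyphot.simpletable import AstroHelpers
>>> ra_str = AstroHelpers.deg2hms(83.822083)     # HH:MM:SS string
>>> ra_deg = AstroHelpers.hms2deg(ra_str)        # back to degrees
>>> dec_str = AstroHelpers.deg2dms(-5.391111)    # DD:MM:SS string
>>> dec_deg = AstroHelpers.dms2deg(dec_str)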
-
static
sphdist
(ra1, dec1, ra2, dec2)[source]¶ measures the spherical distance between 2 points
- ra1: float or sequence
first right ascensions in degrees
- dec1: float or sequence
first declination in degrees
- ra2: float or sequence
second right ascensions in degrees
- dec2: float or sequence
second declination in degrees
- Outputs: float or sequence
returns a distance in degrees
-
class
pyphot.simpletable.
AstroTable
(*args, **kwargs)[source]¶ Bases:
pyphot.simpletable.SimpleTable
Derived from the Table, this class adds implementations of common astro tools, especially conesearch
-
coneSearch
(ra, dec, r, outtype=0)[source]¶ Perform a cone search on a table
- ra0: ndarray[ndim=1, dtype=float]
column name to use as RA source in degrees
- dec0: ndarray[ndim=1, dtype=float]
column name to use as DEC source in degrees
- ra: float
ra to look for (in degrees)
- dec: float
dec to look for (in degrees)
- r: float
distance in degrees
- outtype: int
- type of outputs
0 – minimal, indices of matching coordinates
1 – indices and distances of matching coordinates
2 – full, boolean filter and distances
- t: tuple
- if outtype is 0:
only return indices from ra0, dec0
- elif outtype is 1:
return indices from ra0, dec0 and distances
- elif outtype is 2:
return conditional vector and distance to all ra0, dec0
-
selectWhere
(fields, condition=None, condvars=None, cone=None, zone=None, **kwargs)[source]¶ Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result. A cone search is also possible through the keyword cone, formatted as (ra, dec, r). A zone search is also possible through the keyword zone, formatted as (ramin, ramax, decmin, decmax).
Combination of multiple selections is also available.
-
where
(condition=None, condvars=None, cone=None, zone=None, **kwargs)[source]¶ Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result.
- condition: str
expression to evaluate on the table includes mathematical operations and attribute names
- condvars: dictionary, optional
A dictionary that replaces the local operands in current frame.
out: ndarray/ tuple of ndarrays result equivalent to
np.where()
-
zoneSearch
(ramin, ramax, decmin, decmax, outtype=0)[source]¶ Perform a zone search on a table, i.e., a rectangular selection
- ramin: float
minimal value of RA
- ramax: float
maximal value of RA
- decmin: float
minimal value of DEC
- decmax: float
maximal value of DEC
- outtype: int
- type of outputs
0 or 1 – minimal, indices of matching coordinates
2 – full, boolean filter and distances
- r: sequence
indices or conditional sequence of matching values
-
-
class
pyphot.simpletable.
SimpleTable
(fname, *args, **kwargs)[source]¶ Bases:
object
Table class that is designed to be the basis of any format wrapping around numpy recarrays
- fname: str or object
if str, the file to read from. This may be limited to the formats currently handled automatically. If the format is not correctly handled, you can try by providing an object.
- if object with a structure like dict, ndarray, or recarray-like
the data will be encapsulated into a Table
- caseless: bool
if set, column names will be caseless during operations
- aliases: dict
set of column aliases (can be defined later with set_alias())
- units: dict
set of column units (can be defined later with set_unit())
- desc: dict
set of column descriptions or comments (can be defined later with set_comment())
- header: dict
key, value pairs corresponding to the attributes of the table
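A sketch of wrapping a plain dict into a table and attaching column metadata (column names and values are illustrative):
>>> import numpy as np
>>> from pyphot.simpletable import SimpleTable
>>> data = {'ra': np.random.uniform(0, 360, 100), 'dec': np.random.uniform(-90, 90, 100), 'flux': np.random.rand(100)}
>>> t = SimpleTable(data)
>>> t.set_unit('flux', 'erg/s/cm**2/AA')
>>> t.set_comment('flux', 'observed flux density')
>>> print(t.nrows, t.colnames)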
-
property
Plotter
¶ Plotter instance related to this dataset. Requires plotter add-on to work
-
addCol
(name, data, dtype=None, unit=None, description=None)¶ Add one or multiple columns to the table
- name: str or sequence(str)
The name(s) of the column(s) to add
- data: ndarray, or sequence of ndarray
The column data, or sequence of columns
- dtype: dtype
numpy dtype for the data to add
- unit: str
The unit of the values in the column
- description: str
A description of the content of the column
-
add_column
(name, data, dtype=None, unit=None, description=None)[source]¶ Add one or multiple columns to the table
- name: str or sequence(str)
The name(s) of the column(s) to add
- data: ndarray, or sequence of ndarray
The column data, or sequence of columns
- dtype: dtype
numpy dtype for the data to add
- unit: str
The unit of the values in the column
- description: str
A description of the content of the column
-
append_row
(iterable)[source]¶ Append one row in this table.
see also:
stack()
- iterable: iterable
line to add
-
property
colnames
¶ Sequence of column names
-
compress
(condition, axis=None, out=None)[source]¶ Return selected slices of an array along given axis.
When working along a given axis, a slice along that axis is returned in output for each index where condition evaluates to True. When working on a 1-D array, compress is equivalent to extract.
- condition: 1-D array of bools
Array that selects which entries to return. If len(condition) is less than the size of a along the given axis, then output is truncated to the length of the condition array.
- axis: int, optional
Axis along which to take slices. If None (default), work on the flattened array.
- out: ndarray, optional
Output array. Its type is preserved and it must be of the right shape to hold the output.
- compressed_array: ndarray
A copy of a without the slices along axis for which condition is false.
-
delCol
(names)¶ Remove several columns from the table
- names: sequence
A list containing the names of the columns to remove
-
property
dtype
¶ dtype of the data
-
property
empty_row
¶ Return an empty row array respecting the table format
-
evalexpr
(expr, exprvars=None, dtype=<class 'float'>)[source]¶ Evaluate an expression based on the data and external variables;
all np functions can be used (log, exp, pi…)
- expr: str
expression to evaluate on the table includes mathematical operations and attribute names
- exprvars: dictionary, optional
A dictionary that replaces the local operands in current frame.
- dtype: dtype definition
dtype of the output array
- out: NumPy array
array of the result
-
find_duplicate
(index_only=False, values_only=False)[source]¶ Find duplicates in the table entries and return a list of duplicated elements. At this time this only works if 2 lines are the same entry, not if 2 lines have the same values
-
get
(v, full_match=False)[source]¶ returns a table from columns given as v
this function is equivalent to __getitem__() but preserves the Table format and associated properties (units, description, header)
- v: str
pattern to filter the keys with
- full_match: bool
if set, use re.fullmatch() instead of re.match()
-
groupby
(*key)[source]¶ Create an iterator which returns (key, sub-table) grouped by each value of key(value)
- key: str
expression or pattern to filter the keys with
- key: str or sequence
group key
- tab: SimpleTable instance
sub-table of the group. Header, aliases and column metadata are preserved (linked to the master table).
-
join_by
(r2, key, jointype='inner', r1postfix='1', r2postfix='2', defaults=None, asrecarray=False, asTable=True)[source]¶ Join arrays r1 and r2 on key key.
The key should be either a string or a sequence of string corresponding to the fields used to join the array. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.
- key: str or seq(str)
corresponding to the fields used for comparison.
- r2: Table
Table to join with
- jointype: str in {‘inner’, ‘outer’, ‘leftouter’}
‘inner’ : returns the elements common to both r1 and r2.
‘outer’ : returns the common elements as well as the elements of r1 not in r2 and the elements of r2 not in r1.
‘leftouter’ : returns the common elements and the elements of r1 not in r2.
- r1postfix: str
String appended to the names of the fields of r1 that are present in r2
- r2postfix: str
String appended to the names of the fields of r2 that are present in r1
- defaults: dict
Dictionary mapping field names to the corresponding default values.
- tab: Table
joined table
Note
The output is sorted along the key.
A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are some duplicates…
-
keys
(regexp=None, full_match=False)[source]¶ Return the data column names or a subset of it
- regexp: str
pattern to filter the keys with
- full_match: bool
if set, use re.fullmatch() instead of re.match()
Try to apply the pattern at the start of the string, returning a match object, or None if no match was found.
- seq: sequence
sequence of keys
-
match
(r2, key)[source]¶ Returns the indices at which the tables match; matching uses 2 columns that are compared in values
- r2: Table
second table to use
- key: str
fields used for comparison.
- indexes: tuple
tuple of both indices list where the two columns match.
-
property
name
¶ name of the table given by the Header[‘NAME’] attribute
-
property
nbytes
¶ number of bytes of the object
-
property
ncols
¶ number of columns
-
property
nrows
¶ number of lines
-
pop_columns
(names)[source]¶ Pop several columns from the table
- names: sequence
A list containing the names of the columns to remove
- values: tuple
list of columns
-
pprint
(idx=None, fields=None, ret=False, all=False, full_match=False, headerChar='-', delim=' | ', endline='\n', **kwargs)[source]¶ - Pretty print the table content
you can select the table parts to display using idx to select the rows and fields to only display some columns (ret is only for internal use)
- idx: sequence, slice
sub selection to print
- fields: str, sequence
if str can be a regular expression, and/or list of fields separated by spaces or commas
- ret: bool
if set return the string representation instead of printing the result
- all: bool
if set, force to show all rows
- headerChar: char
Character to be used for the row separator line
- delim: char
The column delimiter.
-
pprint_entry
(num, keys=None)[source]¶ print one line with key and values properly to be readable
- num: int, slice
indice selection
- keys: sequence or str
if str, can be a regular expression if sequence, the sequence of keys to print
-
remove_column
(names)¶ Remove several columns from the table
- names: sequence
A list containing the names of the columns to remove
-
remove_columns
(names)[source]¶ Remove several columns from the table
- names: sequence
A list containing the names of the columns to remove
-
resolve_alias
(colname)[source]¶ Return the name of an aliased column.
Given an alias, return the column name it aliases. This function is a no-op if the alias is a column name itself.
Aliases are defined by using .define_alias()
-
reverse_alias
(colname)[source]¶ Return aliases of a given column.
Given a colname, return a sequence of aliases associated to this column Aliases are defined by using .define_alias()
-
select
(fields, indices=None, **kwargs)[source]¶ Select only a few fields in the table
- fields: str or sequence
fields to keep in the resulting table
- indices: sequence or slice
extract only on these indices
- tab: SimpleTable instance
resulting table
-
selectWhere
(fields, condition, condvars=None, **kwargs)[source]¶ - Read table data fulfilling the given condition.
Only the rows fulfilling the condition are included in the result.
- fields: str or sequence
fields to keep in the resulting table
- condition: str
expression to evaluate on the table includes mathematical operations and attribute names
- condvars: dictionary, optional
A dictionary that replaces the local operands in current frame.
- tab: SimpleTable instance
resulting table
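Continuing the dict-backed table sketch shown under the class parameters above, a row/column sub-selection could look like this (the condition syntax follows evalexpr):
>>> sub = t.selectWhere(['ra', 'dec'], '(flux > 0.5) & (dec > 0)')
>>> print(sub.nrows)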
-
setComment
(colname, comment)¶ Set the comment of a column referenced by its name
- colname: str
column name or registered alias
- comment: str
column description
-
setUnit
(colname, unit)¶ Set the unit of a column referenced by its name
- colname: str
column name or registered alias
- unit: str
unit description
-
set_alias
(alias, colname)[source]¶ Define an alias to a column
- alias: str
The new alias of the column
- colname: str
The column being aliased
-
set_comment
(colname, comment)[source]¶ Set the comment of a column referenced by its name
- colname: str
column name or registered alias
- comment: str
column description
-
set_unit
(colname, unit)[source]¶ Set the unit of a column referenced by its name
- colname: str
column name or registered alias
- unit: str
unit description
-
property
shape
¶ shape of the data
-
sort
(keys, copy=False)[source]¶ Sort the table inplace according to one or more keys. This operates on the existing table (and does not return a new table).
- keys: str or seq(str)
The key(s) to order by
- copy: bool
if set returns a sorted copy instead of working inplace
-
stack
(r, *args, **kwargs)[source]¶ Superposes arrays field by field inplace
t.stack(t1, t2, t3, default=None, inplace=True)
r: Table
-
stats
(fn=None, fields=None, fill=None)[source]¶ Make statistics on columns of a table
- fn: callable or sequence of callables
functions to apply to each column default: (np.mean, np.std, np.nanmin, np.nanmax)
- fields: str or sequence
any key or key expression to subselect columns default is all columns
- fill: value
value when not applicable default np.nan
- tab: Table instance
collection of statistics, one column per function in fn and one line per column in the table
-
take
(indices, axis=None, out=None, mode='raise')[source]¶ Take elements from an array along an axis.
This function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis.
- indices: array_like
The indices of the values to extract. Also allow scalars for indices.
- axis: int, optional
The axis over which to select values. By default, the flattened input array is used.
- out: ndarray, optional
If provided, the result will be placed in this array. It should be of the appropriate shape and dtype.
- mode: {‘raise’, ‘wrap’, ‘clip’}, optional
Specifies how out-of-bounds indices will behave.
‘raise’ – raise an error (default)
‘wrap’ – wrap around
‘clip’ – clip to the range
‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.
- subarray: ndarray
The returned array has the same type as a.
-
to_astropy_table
(**kwargs)[source]¶ A class to represent tables of heterogeneous data.
astropy.table.Table provides a class for heterogeneous tabular data, making use of a numpy structured array internally to store the data values. A key enhancement provided by the Table class is the ability to easily modify the structure of the table by adding or removing columns, or adding new rows of data. In addition table and column metadata are fully supported.
- masked: bool, optional
Specify whether the table is masked.
- names: list, optional
Specify column names
- dtype: list, optional
Specify column data types
- meta: dict, optional
Metadata associated with the table.
- copy: bool, optional
Copy the input data (default=True).
- rows: numpy ndarray, list of lists, optional
Row-oriented data for table instead of the data argument
- copy_indices: bool, optional
Copy any indices in the input data (default=True)
- **kwargs: dict, optional
Additional keyword args when converting table-like object
- df: astropy.table.Table
dataframe
-
to_dask
(**kwargs)[source]¶ Construct a Dask DataFrame
This splits an in-memory Pandas dataframe into several parts and constructs a dask.dataframe from those parts on which Dask.dataframe can operate in parallel.
Note that, despite parallelism, Dask.dataframe may not always be faster than Pandas. We recommend that you stay with Pandas for as long as possible before switching to Dask.dataframe.
- keys: sequence, optional
ordered subset of columns to export
- npartitions: int, optional
The number of partitions of the index to create. Note that depending on the size and index of the dataframe, the output may have fewer partitions than requested.
- chunksize: int, optional
The size of the partitions of the index.
- sort: bool
Sort input first to obtain cleanly divided partitions or don’t sort and don’t get cleanly divided partitions
- name: string, optional
An optional keyname for the dataframe. Defaults to hashing the input
- dask.DataFrame or dask.Series
A dask DataFrame/Series partitioned along the index
-
to_dict
(keys=None, contiguous=False)[source]¶ Construct a dictionary from this dataframe with contiguous arrays
- keys: sequence, optional
ordered subset of columns to export
- contiguous: boolean
make sure each value is a contiguous numpy array object (C-aligned)
- data: dict
converted data
-
to_pandas
(**kwargs)[source]¶ Construct a pandas dataframe
- data: ndarray
(structured dtype), list of tuples, dict, or DataFrame
- keys: sequence, optional
ordered subset of columns to export
- index: string, list of fields, array-like
Field of array to use as the index, alternately a specific set of input labels to use
- exclude: sequence, default None
Columns or fields to exclude
- columns: sequence, default None
Column names to use. If the passed data do not have names associated with them, this argument provides names for the columns. Otherwise this argument indicates the order of the columns in the result (any names not found in the data will become all-NA columns)
- coerce_float: boolean, default False
Attempt to convert values to non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets
df : DataFrame
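Continuing the same sketch, exports to other containers are one-liners (pandas must be installed for to_pandas):
>>> df = t.to_pandas()                  # pandas DataFrame
>>> d = t.to_dict(contiguous=True)      # plain dict of contiguous numpy arrays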
-
to_vaex
(**kwargs)[source]¶ Create an in memory Vaex dataset
- name: str
unique for the dataset
- keys: sequence, optional
ordered subset of columns to export
- df: vaex.DataSetArrays
vaex dataset
-
to_xarray
(**kwargs)[source]¶ Construct an xarray dataset
Each column will be converted into an independent variable in the Dataset. If the dataframe’s index is a MultiIndex, it will be expanded into a tensor product of one-dimensional indices (filling in missing values with NaN). This method will produce a Dataset very similar to that on which the ‘to_dataframe’ method was called, except with possibly redundant dimensions (since all dataset variables will have the same dimensionality).
-
where
(condition, condvars=None, *args, **kwargs)[source]¶ Read table data fulfilling the given condition. Only the rows fulfilling the condition are included in the result.
- condition: str
expression to evaluate on the table includes mathematical operations and attribute names
- condvars: dictionary, optional
A dictionary that replaces the local operands in current frame.
out: ndarray/ tuple of ndarrays result equivalent to
np.where()
pyphot.sun module¶
Handle the Sun Spectrum
-
class
pyphot.sun.
Sun
(source=None, distance=<Quantity(1, 'astronomical_unit')>, flavor='theoretical')[source]¶ Bases:
object
Class that handles the Sun’s spectrum and references.
Observed solar spectrum comes from: ftp://ftp.stsci.edu/cdbs/current_calspec/sun_reference_stis_001.fits
and theoretical spectrum comes from: ftp://ftp.stsci.edu/cdbs/grid/k93models/standards/sun_kurucz93.fits
The theoretical spectrum is scaled to match the observed spectrum from 1.5 to 2.5 microns, and then it is used where the observed spectrum ends. The theoretical model of the Sun from the Kurucz ’93 atlas uses the following parameters when the Sun is at 1 au.
log_Z   T_eff   log_g   V_{Johnson}
+0.0    5777    +4.44   -26.75
- source: str
filename of the sun library
- data: SimpleTable
data table
- units: tuple
detected units from file header
- wavelength: array
wavelength (with units when found)
- flux: array
flux(wavelength) values (with units when provided)
- distance: float
distance to the observed Sun (default, 1 au)
- flavor: str, (default theoretical)
either ‘observed’ using the stis reference, or ‘theoretical’ for the Kurucz model.
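A minimal sketch, assuming the default bundled source file and direct property access (the class may also be usable as a context manager, like Vega below):
>>> from pyphot.sun import Sun
>>> sun = Sun(flavor='theoretical')
>>> print(sun.wavelength[:3], sun.flux[:3])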
-
property
flux
¶ flux(wavelength) values (with units when provided)
-
property
wavelength
¶ wavelength (with units when found)
pyphot.svo module¶
Link to the SVO filter profile service
http://svo2.cab.inta-csic.es/theory/fps/
If your research benefits from the use of the SVO Filter Profile Service, include the following acknowledgement in your publication:
This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported from the Spanish MINECO through grant AYA2017-84089.
and please include the following references in your publication:
The SVO Filter Profile Service. Rodrigo, C., Solano, E., Bayo, A., 2012; https://ui.adsabs.harvard.edu/abs/2012ivoa.rept.1015R/abstract
The SVO Filter Profile Service. Rodrigo, C., Solano, E., 2020; https://ui.adsabs.harvard.edu/abs/2020sea..confE.182R/abstract
Example¶
>>> lst = "2MASS/2MASS.J 2MASS/2MASS.H 2MASS/2MASS.Ks HST/ACS_WFC.F475W HST/ACS_WFC.F814W".split()
>>> objects = [get_pyphot_filter(k) for k in lst]
-
pyphot.svo.
get_pyphot_astropy_filter
(identifier: str)[source]¶ Query the SVO filter profile service and return the filter object
- identifier: str
SVO identifier of the filter profile, e.g., 2MASS/2MASS.Ks or HST/ACS_WFC.F475W. The identifier is the first column on the webpage of the facilities.
- filter: pyphot.astropy.UnitFilter
Filter object
-
pyphot.svo.
get_pyphot_filter
(identifier: str)[source]¶ Query the SVO filter profile service and return the filter object
- identifier: str
SVO identifier of the filter profile, e.g., 2MASS/2MASS.Ks or HST/ACS_WFC.F475W. The identifier is the first column on the webpage of the facilities.
- filter: pyphot.astropy.UnitFilter
Filter object
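A sketch of fetching a passband from SVO (requires network access; the identifier comes from the example above):
>>> from pyphot.svo import get_pyphot_filter
>>> f = get_pyphot_filter('2MASS/2MASS.Ks')
>>> print(f.name, f.lpivot)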
pyphot.vega module¶
Handle vega spec/mags/fluxes manipulations
Works with both ascii and hd5 files for back-compatibility
Vega.wavelength and Vega.flux now have units!
-
class
pyphot.vega.
Vega
(source='/github/workspace/pyphot/libs/vega.hd5')[source]¶ Bases:
object
Class that handles the Vega spectrum and references. This class knows where to find the Vega synthetic spectrum (Bohlin 2007) in order to compute fluxes and magnitudes in given filters
- source: str
filename of the vega library
- data: SimpleTable
data table
- units: tuple
detected units from file header
- wavelength: array
wavelength (with units when found)
- flux: array
flux(wavelength) values (with units when provided)
An instance can be used as a context manager as:
>>> filters = ['HST_WFC3_F275W', 'HST_WFC3_F336W', 'HST_WFC3_F475W', 'HST_WFC3_F814W', 'HST_WFC3_F110W', 'HST_WFC3_F160W']
>>> with Vega() as v:
...     vega_f, vega_mag, flamb = v.getSed(filters)
...     print(vega_f, vega_mag, flamb)
-
property
flux
¶ flux(wavelength) values (with units when provided)
-
property
wavelength
¶ wavelength (with units when found)