Coverage for /opt/obspy/update-docs/src/obspy/obspy/core/trace : 90%

# -*- coding: utf-8 -*-
"""
Module for handling ObsPy Trace objects.

:copyright:
    The ObsPy Development Team (devs@obspy.org)
:license:
    GNU Lesser General Public License, Version 3
    (http://www.gnu.org/copyleft/lesser.html)
"""
"""
A container for additional header information of an ObsPy Trace object.

A ``Stats`` object may contain all header information (also known as meta
data) of a :class:`~obspy.core.trace.Trace` object. Those headers may be
accessed or modified either in the dictionary style or directly via a
corresponding attribute. There are various default attributes which are
required by all waveform import and export modules within ObsPy such as
:mod:`obspy.mseed`.
:type header: dict or :class:`~obspy.core.trace.Stats`, optional :param header: Dictionary containing meta information of a single :class:`~obspy.core.trace.Trace` object. Possible keywords are summarized in the following `Default Attributes`_ section.
.. rubric:: Basic Usage
>>> stats = Stats()
>>> stats.network = 'BW'
>>> stats['network']
'BW'
>>> stats['station'] = 'MANZ'
>>> stats.station
'MANZ'
.. rubric:: _`Default Attributes`
``sampling_rate`` : float, optional
    Sampling rate in hertz (default value is 1.0).
``delta`` : float, optional
    Sample distance in seconds (default value is 1.0).
``calib`` : float, optional
    Calibration factor (default value is 1.0).
``npts`` : int, optional
    Number of sample points (default value is 0, which implies that no data
    is present).
``network`` : string, optional
    Network code (default is an empty string).
``location`` : string, optional
    Location code (default is an empty string).
``station`` : string, optional
    Station code (default is an empty string).
``channel`` : string, optional
    Channel code (default is an empty string).
``starttime`` : :class:`~obspy.core.utcdatetime.UTCDateTime`, optional
    Date and time of the first data sample given in UTC (default value is
    "1970-01-01T00:00:00.0Z").
``endtime`` : :class:`~obspy.core.utcdatetime.UTCDateTime`, optional
    Date and time of the last data sample given in UTC (default value is
    "1970-01-01T00:00:00.0Z").
.. rubric:: Notes
(1) The attributes ``sampling_rate`` and ``delta`` are linked to each other. If one of the attributes is modified, the other will be recalculated.
>>> stats = Stats()
>>> stats.sampling_rate
1.0
>>> stats.delta = 0.005
>>> stats.sampling_rate
200.0
(2) The attributes ``starttime``, ``npts``, ``sampling_rate`` and ``delta`` are monitored and used to automatically calculate the ``endtime``.
>>> stats = Stats()
>>> stats.npts = 60
>>> stats.delta = 1.0
>>> stats.starttime = UTCDateTime(2009, 1, 1, 12, 0, 0)
>>> stats.endtime
UTCDateTime(2009, 1, 1, 12, 0, 59)
>>> stats.delta = 0.5
>>> stats.endtime
UTCDateTime(2009, 1, 1, 12, 0, 29, 500000)
.. note:: The attribute ``endtime`` is currently calculated as ``endtime = starttime + (npts-1) * delta``. This behaviour may change in the future to ``endtime = starttime + npts * delta``.
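As a quick plain-arithmetic check of the current formula (a sketch using float seconds as a stand-in for ``UTCDateTime``, matching the values in the doctest above):

```python
# endtime = starttime + (npts - 1) * delta, as stated in the note
npts = 60
delta = 1.0
starttime = 0.0  # seconds since epoch, stand-in for UTCDateTime

endtime = starttime + (npts - 1) * delta
print(endtime)  # 59.0 -> one delta short of npts * delta
```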
(3) The attribute ``endtime`` is read-only and cannot be modified.
>>> stats = Stats()
>>> stats.endtime = UTCDateTime(2009, 1, 1, 12, 0, 0)
Traceback (most recent call last):
...
AttributeError: Attribute "endtime" in Stats object is read only!
>>> stats['endtime'] = UTCDateTime(2009, 1, 1, 12, 0, 0)
Traceback (most recent call last):
...
AttributeError: Attribute "endtime" in Stats object is read only!
(4) The attribute ``npts`` will be automatically updated from the :class:`~obspy.core.trace.Trace` object.
>>> trace = Trace()
>>> trace.stats.npts
0
>>> trace.data = np.array([1, 2, 3, 4])
>>> trace.stats.npts
4
"""
{
    'sampling_rate': 1.0,
    'delta': 1.0,
    'starttime': UTCDateTime(0),
    'endtime': UTCDateTime(0),
    'npts': 0,
    'calib': 1.0,
    'network': '',
    'station': '',
    'location': '',
    'channel': '',
}
# keys which need to refresh derived values
# ensure correct data type
# set current key
# set derived value: delta
# set derived value: endtime
# prevent a calibration factor of 0
# all other keys
"""
Return better readable string representation of Stats object.
"""
['starttime', 'endtime', 'sampling_rate', 'delta', 'npts', 'calib']
""" An object containing data of a continuous series, such as a seismic trace.
:type data: :class:`~numpy.ndarray` or :class:`~numpy.ma.MaskedArray` :param data: Array of data samples :type header: dict or :class:`~obspy.core.trace.Stats` :param header: Dictionary containing header fields
:var id: A SEED compatible identifier of the trace. :var stats: A container :class:`~obspy.core.trace.Stats` for additional header information of the trace. :var data: Data samples in a :class:`~numpy.ndarray` or :class:`~numpy.ma.MaskedArray`
.. rubric:: Supported Operations
``trace = traceA + traceB``
    Merges traceA and traceB into one new trace object.
    See also: :meth:`Trace.__add__`.
``len(trace)``
    Returns the number of samples contained in the trace. That is, it is
    equal to ``len(trace.data)``.
    See also: :meth:`Trace.__len__`.
``str(trace)``
    Returns basic information about the trace object.
    See also: :meth:`Trace.__str__`.
"""
# make sure Trace gets initialized with ndarray as self.data,
# otherwise we could end up with e.g. a list object in self.data
msg = "Trace.data must be a NumPy array."
raise ValueError(msg)
# set some defaults if not set yet; for details see
# http://www.obspy.org/wiki/KnownIssues#DefaultParameterValuesinPython
# set data without changing npts in stats object (for headonly option)
""" Implements rich comparison of Trace objects for "==" operator.
Traces are the same if both their data and stats are the same. """ # check if other object is a Trace # comparison of Stats objects is supported by underlying AttribDict # comparison of ndarrays is supported by NumPy return False
""" Implements rich comparison of Trace objects for "!=" operator.
Calls __eq__() and returns the opposite. """
""" Too ambiguous, throw an Error. """ raise NotImplementedError("Too ambiguous, therefore not implemented.")
""" Too ambiguous, throw an Error. """ raise NotImplementedError("Too ambiguous, therefore not implemented.")
""" Too ambiguous, throw an Error. """ raise NotImplementedError("Too ambiguous, therefore not implemented.")
""" Too ambiguous, throw an Error. """ raise NotImplementedError("Too ambiguous, therefore not implemented.")
""" Returns short summary string of the current trace.
:rtype: str :return: Short summary string of the current trace containing the SEED identifier, start time, end time, sampling rate and number of points of the current trace.
.. rubric:: Example
>>> tr = Trace(header={'station': 'FUR', 'network': 'GR'})
>>> str(tr)  # doctest: +ELLIPSIS
'GR.FUR.. | 1970-01-01T00:00:00.000000Z - ... | 1.0 Hz, 0 samples'
"""
# set fixed id width
# output depending on delta or sampling rate bigger than one
if self.stats.sampling_rate < 0.1:
    if hasattr(self.stats, 'preview') and self.stats.preview:
        out = out + ' | ' + \
            "%(starttime)s - %(endtime)s | " + \
            "%(delta).1f s, %(npts)d samples [preview]"
    else:
        out = out + ' | ' + \
            "%(starttime)s - %(endtime)s | " + \
            "%(delta).1f s, %(npts)d samples"
else:
    if hasattr(self.stats, 'preview') and self.stats.preview:
        out = out + ' | ' + \
            "%(starttime)s - %(endtime)s | " + \
            "%(sampling_rate).1f Hz, %(npts)d samples [preview]"
    else:
        out = out + ' | ' + \
            "%(starttime)s - %(endtime)s | " + \
            "%(sampling_rate).1f Hz, %(npts)d samples"
# check for masked array
""" Returns number of data samples of the current trace.
:rtype: int :return: Number of data samples.
.. rubric:: Example
>>> trace = Trace(data=np.array([1, 2, 3, 4]))
>>> trace.count()
4
>>> len(trace)
4
"""
"""
__setattr__ method of Trace object.
"""
# any change in Trace.data will dynamically set Trace.stats.npts
msg = "Trace.data must be a NumPy array."
raise ValueError(msg)
""" __getitem__ method of Trace object.
:rtype: list :return: List of data points """
""" Creates a new Stream containing num copies of this trace.
:type num: int :param num: Number of copies. :returns: New ObsPy Stream object.
.. rubric:: Example
>>> from obspy import read
>>> tr = read()[0]
>>> st = tr * 5
>>> len(st)
5
"""
raise TypeError("Integer expected")
""" Splits Trace into new Stream containing num Traces of the same size.
:type num: int :param num: Number of traces in returned Stream. Last trace may contain fewer samples. :returns: New ObsPy Stream object.
.. rubric:: Example
>>> from obspy import read
>>> tr = read()[0]
>>> print tr  # doctest: +ELLIPSIS
BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z ... | 100.0 Hz, 3000 samples
>>> st = tr / 7
>>> print st  # doctest: +ELLIPSIS
7 Trace(s) in Stream:
BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:07.290000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:11.580000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:15.870000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:20.160000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:24.450000Z ... | 100.0 Hz, 429 samples
BW.RJOB..EHZ | 2009-08-24T00:20:28.740000Z ... | 100.0 Hz, 426 samples
"""
raise TypeError("Integer expected")
""" Splits Trace into new Stream containing Traces with num samples.
:type num: int :param num: Number of samples in each trace in returned Stream. Last trace may contain fewer samples. :returns: New ObsPy Stream object.
.. rubric:: Example
>>> from obspy import read
>>> tr = read()[0]
>>> print tr  # doctest: +ELLIPSIS
BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z ... | 100.0 Hz, 3000 samples
>>> st = tr % 800
>>> print st  # doctest: +ELLIPSIS
4 Trace(s) in Stream:
BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z ... | 100.0 Hz, 800 samples
BW.RJOB..EHZ | 2009-08-24T00:20:11.000000Z ... | 100.0 Hz, 800 samples
BW.RJOB..EHZ | 2009-08-24T00:20:19.000000Z ... | 100.0 Hz, 800 samples
BW.RJOB..EHZ | 2009-08-24T00:20:27.000000Z ... | 100.0 Hz, 600 samples
"""
fill_value=None, sanity_checks=True):
"""
Adds another Trace object to current trace.
:type method: ``0`` or ``1``, optional
:param method: Method to handle overlaps of traces. Defaults to ``0``.
    See the `Handling Overlaps`_ section below for further details.
:type fill_value: int or float, ``'latest'`` or ``'interpolate'``, optional
:param fill_value: Fill value for gaps. Defaults to ``None``. Traces
    will be converted to NumPy masked arrays if no value is given and
    gaps are present. If the keyword ``'latest'`` is provided it will
    use the latest value before the gap. If keyword ``'interpolate'``
    is provided, missing values are linearly interpolated (not changing
    the data type e.g. of integer valued traces). See the
    `Handling Gaps`_ section below for further details.
:type interpolation_samples: int, optional
:param interpolation_samples: Used only for ``method=1``. It specifies
    the number of samples which are used to interpolate between
    overlapping traces. Defaults to ``0``. If set to ``-1`` all
    overlapping samples are interpolated.
:type sanity_checks: bool, optional
:param sanity_checks: Enables some sanity checks before merging traces.
    Defaults to ``True``.
Trace data will be converted into a NumPy masked array data type if any gaps are present. This behavior may be prevented by setting the ``fill_value`` parameter. The ``method`` argument controls the handling of overlapping data values.
Sampling rate, data type and trace.id of both traces must match.
.. rubric:: _`Handling Overlaps`
====== ===============================================================
Method Description
====== ===============================================================
0      Discard overlapping data. Overlaps are essentially treated the
       same way as gaps::

           Trace 1: AAAAAAAA
           Trace 2:     FFFFFFFF
           1 + 2  : AAAA----FFFF

       Contained traces with differing data will be marked as gap::

           Trace 1: AAAAAAAAAAAA
           Trace 2:     FF
           1 + 2  : AAAA--AAAAAA
1      Discard data of the previous trace assuming the following trace
       contains data with a more correct time value. The parameter
       ``interpolation_samples`` specifies the number of samples used
       to linearly interpolate between the two traces in order to
       prevent steps. Note that if there are gaps inside, the returned
       array is still a masked array; only if ``fill_value`` is set is
       the returned array a normal array with gaps filled by the fill
       value.

       No interpolation (``interpolation_samples=0``)::

           Trace 1: AAAAAAAA
           Trace 2:     FFFFFFFF
           1 + 2  : AAAAFFFFFFFF

       Interpolate first two samples (``interpolation_samples=2``)::

           Trace 1: AAAAAAAA
           Trace 2:     FFFFFFFF
           1 + 2  : AAAACDFFFFFF

       Interpolate all samples (``interpolation_samples=-1``)::

           Trace 1: AAAAAAAA
           Trace 2:     FFFFFFFF
           1 + 2  : AAAABCDEFFFF

       Any contained traces with different data will be discarded::

           Trace 1: AAAAAAAAAAAA (contained trace)
           Trace 2:     FF
           1 + 2  : AAAAAAAAAAAA
====== ===============================================================
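The linear cross-over used for ``method=1`` can be sketched with plain NumPy. This is an illustration of the idea only, not ObsPy's actual implementation; ``blend_overlap`` and its arguments are hypothetical names. As in the source, the ramp is built over ``interpolation_samples + 2`` points including the bounding samples, which are then cut off:

```python
import numpy as np

def blend_overlap(left, right, interpolation_samples):
    """Join the non-overlapping part of the first trace (`left`) with the
    second trace (`right`), ramping linearly over the first
    `interpolation_samples` samples of the overlap (sketch of method=1)."""
    if interpolation_samples == 0:
        return np.concatenate([left, right])
    # include left and right bounding samples, then drop them again
    ramp = np.linspace(left[-1], right[interpolation_samples - 1],
                       interpolation_samples + 2)[1:-1]
    return np.concatenate([left, ramp, right[interpolation_samples:]])

a = np.array([1.0, 1.0, 1.0, 1.0])        # "AAAA"
b = np.array([5.0] * 8)                    # "FFFFFFFF"
print(blend_overlap(a, b, 2))              # "AAAACDFFFFFF" shape: 12 samples
```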
.. rubric:: _`Handling gaps`
1. Traces with gaps and ``fill_value=None`` (default)::
Trace 1: AAAA
Trace 2:         FFFF
1 + 2  : AAAA----FFFF
2. Traces with gaps and given ``fill_value=0``::
Trace 1: AAAA
Trace 2:         FFFF
1 + 2  : AAAA0000FFFF
3. Traces with gaps and given ``fill_value='latest'``::
Trace 1: ABCD
Trace 2:         FFFF
1 + 2  : ABCDDDDDFFFF
4. Traces with gaps and given ``fill_value='interpolate'``::
Trace 1: AAAA
Trace 2:         FFFF
1 + 2  : AAAABCDEFFFF
"""
raise TypeError
# check id, sample rate, calibration factor, data type and times
raise TypeError("Calibration factor differs")
# check whether to use the latest value to fill a gap
# create the returned trace
# check if overlap or gap
# overlap: check if data are the same; handle contained traces
# include left and right sample (interpolation_samples + 2),
# then cut ls and rs and ensure correct data type
raise NotImplementedError
# exact fit - merge both traces
# gap: use fixed value or interpolate in between
# merge traces depending on numpy array type
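The gap-handling variants above can be imitated with plain NumPy masked arrays (a sketch of the concept, independent of ObsPy's internals; all names here are illustrative):

```python
import numpy as np

left = np.array([1.0, 2.0, 3.0, 4.0])   # "ABCD"
right = np.array([9.0] * 4)              # "FFFF"
gap = 4                                  # samples missing in between

# fill_value=None: the gap becomes masked samples
joined = np.ma.masked_all(len(left) + gap + len(right))
joined[:len(left)] = left
joined[-len(right):] = right

# fill_value='latest': repeat the last value before the gap
latest = joined.filled(left[-1])

# fill_value='interpolate': linear ramp across the gap
ramp = np.linspace(left[-1], right[0], gap + 2)[1:-1]
interp = np.concatenate([left, ramp, right])

print(latest)   # "ABCDDDDDFFFF"
print(interp)   # "ABCDEFGHFFFF"-style linear fill
```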
""" Returns a SEED compatible identifier of the trace.
:rtype: str :return: SEED identifier
The SEED identifier contains the network, station, location and channel code for the current Trace object.
.. rubric:: Example
>>> meta = {'station': 'MANZ', 'network': 'BW', 'channel': 'EHZ'}
>>> tr = Trace(header=meta)
>>> tr.getId()
'BW.MANZ..EHZ'
>>> tr.id
'BW.MANZ..EHZ'
"""
""" Creates a simple graph of the current trace.
Various options are available to change the appearance of the waveform plot. Please see :meth:`~obspy.core.stream.Stream.plot` method for all possible options.
.. rubric:: Example
>>> from obspy import read
>>> st = read()
>>> tr = st[0]
>>> tr.plot()  # doctest: +SKIP
.. plot::
from obspy import read
st = read()
tr = st[0]
tr.plot()
"""
""" Creates a spectrogram plot of the trace.
For details on kwargs that can be used to customize the spectrogram plot see :func:`~obspy.imaging.spectrogram.spectrogram`.
.. rubric:: Example
>>> from obspy import read
>>> st = read()
>>> tr = st[0]
>>> tr.spectrogram()  # doctest: +SKIP
.. plot::
from obspy import read
st = read()
tr = st[0]
tr.spectrogram(sphinx=True)
"""
# set some default values
""" Saves current trace into a file.
:type filename: string :param filename: The name of the file to write. :type format: string :param format: The format to write must be specified. One of ``"MSEED"``, ``"GSE2"``, ``"SAC"``, ``"SACXY"``, ``"Q"``, ``"SH_ASC"``, ``"SEGY"``, ``"SU"``, ``"WAV"``, ``"PICKLE"``. See :meth:`obspy.core.stream.Stream.write` method for all possible formats. :param kwargs: Additional keyword arguments passed to the underlying waveform writer method.
.. rubric:: Example
>>> tr = Trace()
>>> tr.write("out.mseed", format="MSEED")  # doctest: +SKIP
"""
# we need to import here in order to prevent a circular import of
# Stream and Trace classes
fill_value=None):
"""
Cuts current trace to given start time. For more info see
:meth:`~obspy.core.trace.Trace.trim`.
.. rubric:: Example
>>> tr = Trace(data=np.arange(0, 10))
>>> tr.stats.delta = 1.0
>>> tr._ltrim(tr.stats.starttime + 8)
>>> tr.data
array([8, 9])
>>> tr.stats.starttime
UTCDateTime(1970, 1, 1, 0, 0, 8)
"""
raise TypeError
# check if in boundary
# due to rounding and npts, starttime must always be right of
# self.stats.starttime; rtrim relies on it
# adjust starttime only if delta is greater than zero or if the values
# are padded with masked arrays
except ValueError:
    # createEmptyDataChunk returns negative ValueError ?? for
    # too large number of points, e.g. 189336539799
    raise Exception("Time offset between starttime and "
                    "trace.starttime too large")
self.data = np.empty(0, dtype=org_dtype)
return
""" Cuts current trace to given end time. For more info see :meth:`~obspy.core.trace.Trace.trim`.
.. rubric:: Example
>>> tr = Trace(data=np.arange(0, 10))
>>> tr.stats.delta = 1.0
>>> tr._rtrim(tr.stats.starttime + 2)
>>> tr.data
array([0, 1, 2])
>>> tr.stats.endtime
UTCDateTime(1970, 1, 1, 0, 0, 2)
"""
raise TypeError
# check if in boundary
# solution for #127, however some tests need to be changed
#delta = -1 * int(math.floor(round((self.stats.endtime - endtime) *
#    self.stats.sampling_rate, 7)))
except ValueError:
    # createEmptyDataChunk returns negative ValueError ?? for
    # too large number of points, e.g. 189336539799
    raise Exception("Time offset between starttime and "
                    "trace.starttime too large")
delta * self.stats.delta
# cut from right
nearest_sample=True, fill_value=None): """ Cuts current trace to given start and end time.
:type starttime: :class:`~obspy.core.utcdatetime.UTCDateTime`, optional :param starttime: Specify the start time. :type endtime: :class:`~obspy.core.utcdatetime.UTCDateTime`, optional :param endtime: Specify the end time. :type pad: bool, optional :param pad: Gives the possibility to trim at time points outside the time frame of the original trace, filling the trace with the given ``fill_value``. Defaults to ``False``. :type nearest_sample: bool, optional :param nearest_sample: If set to ``True``, the closest sample is selected, if set to ``False``, the next sample containing the time is selected. Defaults to ``True``.
Given the following trace containing 4 samples, "|" are the sample points, "A" is the requested starttime::
|         A|         |         |
``nearest_sample=True`` will select the second sample point, ``nearest_sample=False`` will select the first sample point.
:type fill_value: int, float or ``None``, optional :param fill_value: Fill value for gaps. Defaults to ``None``. Traces will be converted to NumPy masked arrays if no value is given and gaps are present.
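The two ``nearest_sample`` selection modes described above amount to simple index arithmetic, sketched here with a hypothetical helper (an illustration of the idea, not ObsPy's actual code):

```python
import math

def start_index(offset_seconds, sampling_rate, nearest_sample=True):
    """Index of the first sample kept when trimming to a start time
    `offset_seconds` after the first sample (illustrative sketch)."""
    exact = offset_seconds * sampling_rate
    if nearest_sample:
        return int(round(exact))    # closest sample point
    return int(math.floor(exact))   # last sample at or before the time

# A start time just before the second sample point (delta = 1 s):
print(start_index(0.9, 1.0, nearest_sample=True))   # 1
print(start_index(0.9, 1.0, nearest_sample=False))  # 0
```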
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object.
.. rubric:: Example
>>> tr = Trace(data=np.arange(0, 10))
>>> tr.stats.delta = 1.0
>>> t = tr.stats.starttime
>>> tr.trim(t + 2.000001, t + 7.999999)
>>> tr.data
array([2, 3, 4, 5, 6, 7, 8])
"""
# check time order and swap eventually
raise ValueError("starttime is larger than endtime")
# cut it
""" Returns a new Trace object with data going from start to end time.
:type starttime: :class:`~obspy.core.utcdatetime.UTCDateTime` :param starttime: Specify the start time of slice. :type endtime: :class:`~obspy.core.utcdatetime.UTCDateTime` :param endtime: Specify the end time of slice. :return: New :class:`~obspy.core.trace.Trace` object. Does not copy data but just passes a reference to it.
.. rubric:: Example
>>> tr = Trace(data=np.arange(0, 10))
>>> tr.stats.delta = 1.0
>>> t = tr.stats.starttime
>>> tr2 = tr.slice(t + 2, t + 8)
>>> tr2.data
array([2, 3, 4, 5, 6, 7, 8])
"""
""" Verifies current trace object against available meta data.
.. rubric:: Example
>>> tr = Trace(data=np.array([1, 2, 3, 4]))
>>> tr.stats.npts = 100
>>> tr.verify()  # doctest: +ELLIPSIS
Traceback (most recent call last):
...
Exception: ntps(100) differs from data size(4)
"""
msg = "End time(%s) before start time(%s)"
raise Exception(msg % (self.stats.endtime, self.stats.starttime))
msg = "Sample rate(%f) * time delta(%.4lf) + 1 != data len(%d)"
raise Exception(msg % (sr, delta, len(self.data)))
# check if the endtime fits the starttime, npts and sampling_rate
(self.stats.npts - 1) / float(self.stats.sampling_rate):
msg = "Endtime is not the time of the last sample."
raise Exception(msg)
msg = "Data size should be 0, but is %d"
raise Exception(msg % self.stats.npts)
msg = "Attribute stats must be an instance of obspy.core.Stats"
raise Exception(msg)
self.data.dtype.byteorder not in ["=", "|"]:
msg = "Trace data should be stored as numpy.ndarray in the " + \
      "system specific byte order."
raise Exception(msg)
remove_sensitivity=True, simulate_sensitivity=True, **kwargs):
"""
Correct for instrument response / Simulate new instrument response.
:type paz_remove: dict, None :param paz_remove: Dictionary containing keys ``'poles'``, ``'zeros'``, ``'gain'`` (A0 normalization factor). Poles and zeros must be a list of complex floating point numbers, gain must be of type float. Poles and Zeros are assumed to correct to m/s, SEED convention. Use ``None`` for no inverse filtering. :type paz_simulate: dict, None :param paz_simulate: Dictionary containing keys ``'poles'``, ``'zeros'``, ``'gain'``. Poles and zeros must be a list of complex floating point numbers, gain must be of type float. Or ``None`` for no simulation. :type remove_sensitivity: bool :param remove_sensitivity: Determines if data is divided by ``paz_remove['sensitivity']`` to correct for overall sensitivity of recording instrument (seismometer/digitizer) during instrument correction. :type simulate_sensitivity: bool :param simulate_sensitivity: Determines if data is multiplied with ``paz_simulate['sensitivity']`` to simulate overall sensitivity of new instrument (seismometer/digitizer) during instrument simulation.
This function corrects for the original instrument response given by `paz_remove` and/or simulates a new instrument response given by `paz_simulate`. For additional information and more options to control the instrument correction/simulation (e.g. water level, demeaning, tapering, ...) see :func:`~obspy.signal.invsim.seisSim`.
`paz_remove` and `paz_simulate` are expected to be dictionaries containing information on poles, zeros and gain (and usually also sensitivity).
If both `paz_remove` and `paz_simulate` are specified, both steps are performed in one go in the frequency domain, otherwise only the specified step is performed.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: Example
>>> from obspy import read
>>> from obspy.signal import cornFreq2Paz
>>> st = read()
>>> tr = st[0]
>>> tr.plot()  # doctest: +SKIP
>>> paz_sts2 = {'poles': [-0.037004+0.037016j, -0.037004-0.037016j,
...                       -251.33+0j,
...                       -131.04-467.29j, -131.04+467.29j],
...             'zeros': [0j, 0j],
...             'gain': 60077000.0,
...             'sensitivity': 2516778400.0}
>>> paz_1hz = cornFreq2Paz(1.0, damp=0.707)
>>> paz_1hz['sensitivity'] = 1.0
>>> tr.simulate(paz_remove=paz_sts2, paz_simulate=paz_1hz)
>>> tr.plot()  # doctest: +SKIP
.. plot::
from obspy import read
from obspy.signal import cornFreq2Paz
st = read()
tr = st[0]
tr.plot()
paz_sts2 = {'poles': [-0.037004+0.037016j, -0.037004-0.037016j,
                      -251.33+0j,
                      -131.04-467.29j, -131.04+467.29j],
            'zeros': [0j, 0j],
            'gain': 60077000.0,
            'sensitivity': 2516778400.0}
paz_1hz = cornFreq2Paz(1.0, damp=0.707)
paz_1hz['sensitivity'] = 1.0
tr.simulate(paz_remove=paz_sts2, paz_simulate=paz_1hz)
tr.plot()
"""
# XXX accepting string "self" and using attached PAZ then
paz_remove = self.stats.paz
paz_remove=paz_remove, paz_simulate=paz_simulate, remove_sensitivity=remove_sensitivity, simulate_sensitivity=simulate_sensitivity, **kwargs)
# add processing information to the stats dictionary (paz_remove, remove_sensitivity) (paz_simulate, simulate_sensitivity)
""" Filters the data of the current trace.
:type type: str :param type: String that specifies which filter is applied (e.g. ``"bandpass"``). See the `Supported Filter`_ section below for further details. :param options: Necessary keyword arguments for the respective filter that will be passed on. (e.g. ``freqmin=1.0``, ``freqmax=20.0`` for ``"bandpass"``)
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Filter`
``'bandpass'`` Butterworth-Bandpass (uses :func:`obspy.signal.filter.bandpass`).
``'bandstop'`` Butterworth-Bandstop (uses :func:`obspy.signal.filter.bandstop`).
``'lowpass'`` Butterworth-Lowpass (uses :func:`obspy.signal.filter.lowpass`).
``'highpass'`` Butterworth-Highpass (uses :func:`obspy.signal.filter.highpass`).
``'lowpassCheby2'`` Cheby2-Lowpass (uses :func:`obspy.signal.filter.lowpassCheby2`).
``'lowpassFIR'`` (experimental) FIR-Lowpass (uses :func:`obspy.signal.filter.lowpassFIR`).
``'remezFIR'`` (experimental) Minimax optimal bandpass using Remez algorithm (uses :func:`obspy.signal.filter.remezFIR`).
.. rubric:: Example
>>> from obspy import read
>>> st = read()
>>> tr = st[0]
>>> tr.filter("highpass", freq=1.0)
>>> tr.plot()  # doctest: +SKIP
.. plot::
from obspy import read
st = read()
tr = st[0]
tr.filter("highpass", freq=1.0)
tr.plot()
"""
# retrieve function call from entry points
# filtering: the options dictionary is passed as kwargs to the function
# that is mapped according to the filter_functions dictionary
# add processing information to the stats dictionary
""" Runs a triggering algorithm on the data of the current trace.
:param type: String that specifies which trigger is applied (e.g. ``'recstalta'``). See the `Supported Trigger`_ section below for further details. :param options: Necessary keyword arguments for the respective trigger that will be passed on. (e.g. ``sta=3``, ``lta=10``) Arguments ``sta`` and ``lta`` (seconds) will be mapped to ``nsta`` and ``nlta`` (samples) by multiplying with sampling rate of trace. (e.g. ``sta=3``, ``lta=10`` would call the trigger with 3 and 10 seconds average, respectively)
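The seconds-to-samples mapping described for ``sta``/``lta`` can be sketched as follows (a hypothetical helper for illustration, not ObsPy's internal code):

```python
def seconds_to_samples(options, sampling_rate):
    """Map 'sta'/'lta' given in seconds to 'nsta'/'nlta' in samples by
    multiplying with the sampling rate (illustrative sketch)."""
    for key in ('sta', 'lta'):
        if key in options:
            # triggering routines need integer sample counts
            options['n' + key] = int(options.pop(key) * sampling_rate)
    return options

print(seconds_to_samples({'sta': 3, 'lta': 10}, 100.0))
# {'nsta': 300, 'nlta': 1000}
```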
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Trigger`
``'classicstalta'`` Computes the classic STA/LTA characteristic function (uses :func:`obspy.signal.trigger.classicSTALTA`).
``'recstalta'`` Recursive STA/LTA (uses :func:`obspy.signal.trigger.recSTALTA`).
``'recstaltapy'`` Recursive STA/LTA written in Python (uses :func:`obspy.signal.trigger.recSTALTAPy`).
``'delayedstalta'`` Delayed STA/LTA. (uses :func:`obspy.signal.trigger.delayedSTALTA`).
``'carlstatrig'`` Computes the carlSTATrig characteristic function (uses :func:`obspy.signal.trigger.carlSTATrig`).
``'zdetect'`` Z-detector (uses :func:`obspy.signal.trigger.zDetect`).
.. rubric:: Example
>>> from obspy import read
>>> st = read()
>>> tr = st[0]
>>> tr.filter("highpass", freq=1.0)
>>> tr.plot()  # doctest: +SKIP
>>> tr.trigger("recstalta", sta=3, lta=10)
>>> tr.plot()  # doctest: +SKIP
.. plot::
from obspy import read
st = read()
tr = st[0]
tr.filter("highpass", freq=1.0)
tr.plot()
tr.trigger('recstalta', sta=3, lta=10)
tr.plot()
"""
# retrieve function call from entry points
# convert the two arguments sta and lta to nsta and nlta as used by
# actual triggering routines (needs conversion to int, as samples are
# used in length of trigger averages)
# triggering: the options dictionary is passed as kwargs to the function
# that is mapped according to the trigger_functions dictionary
# add processing information to the stats dictionary
strict_length=False):
"""
Resample trace data using Fourier method.
:type sampling_rate: float :param sampling_rate: The sampling rate of the resampled signal. :type window: array_like, callable, string, float, or tuple, optional :param window: Specifies the window applied to the signal in the Fourier domain. Defaults to ``'hanning'`` window. See :func:`scipy.signal.resample` for details. :type strict_length: bool, optional :param strict_length: Leave traces unchanged for which endtime of trace would change. Defaults to ``False``.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
Uses :func:`scipy.signal.resample`. Because a Fourier method is used, the signal is assumed to be periodic.
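The Fourier resampling idea can be sketched with NumPy's FFT alone. This is a simplified stand-in for :func:`scipy.signal.resample` (no ``window`` option, and a careful implementation would also split the Nyquist bin), intended only to show why the signal is treated as periodic:

```python
import numpy as np

def fourier_resample(x, num):
    """Resample x to `num` points by zero-padding (or truncating) its
    spectrum -- the signal is implicitly assumed to be periodic."""
    n = len(x)
    spec = np.fft.rfft(x)
    out_spec = np.zeros(num // 2 + 1, dtype=complex)
    m = min(len(spec), len(out_spec))
    out_spec[:m] = spec[:m]
    # rescale so amplitudes (e.g. the mean) are preserved
    return np.fft.irfft(out_spec, num) * (num / n)

x = np.array([0.5, 0, 0.5, 1, 0.5, 0, 0.5, 1], dtype=float)
y = fourier_resample(x, 32)   # upsample by a factor of 4
print(len(y))                 # 32
```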
.. rubric:: Example
>>> tr = Trace(data=np.array([0.5, 0, 0.5, 1, 0.5, 0, 0.5, 1]))
>>> len(tr)
8
>>> tr.stats.sampling_rate
1.0
>>> tr.resample(4.0)
>>> len(tr)
32
>>> tr.stats.sampling_rate
4.0
>>> tr.data  # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
array([ 0.5 , 0.40432914, 0.3232233 , 0.26903012, 0.25 ...
"""
# check if endtime changes and this is not explicitly allowed
msg = "Endtime of trace would change and strict_length=True."
raise ValueError(msg)
# do automatic lowpass filtering; be sure filter still behaves well
if factor > 16:
    msg = "Automatic filter design is unstable for resampling " + \
          "factors (current sampling rate/new sampling rate) " + \
          "above 16. Manual resampling is necessary."
    raise ArithmeticError(msg)
freq = self.stats.sampling_rate * 0.5 / float(factor)
self.filter('lowpassCheby2', freq=freq, maxorder=12)
# resample
# add processing information to the stats dictionary
""" Downsample trace data by an integer factor.
:type factor: int :param factor: Factor by which the sampling rate is lowered by decimation. :type no_filter: bool, optional :param no_filter: Deactivates automatic filtering if set to ``True``. Defaults to ``False``. :type strict_length: bool, optional :param strict_length: Leave traces unchanged for which endtime of trace would change. Defaults to ``False``.
Currently a simple integer decimation is implemented. Only every ``decimation_factor``-th sample remains in the trace, all other samples are thrown away. Prior to decimation a lowpass filter is applied to ensure no aliasing artifacts are introduced. The automatic filtering can be deactivated with ``no_filter=True``.
If the length of the data array modulo ``decimation_factor`` is not zero then the endtime of the trace is changing on sub-sample scale. To abort downsampling in case of changing endtimes set ``strict_length=True``.
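Stripped of the anti-alias filter, the decimation step itself is just integer slicing (a sketch of the idea in plain NumPy, not ObsPy's implementation):

```python
import numpy as np

data = np.arange(10)
factor = 4

decimated = data[::factor]                # keep every factor-th sample
new_sampling_rate = 1.0 / float(factor)   # old rate here was 1.0 Hz

print(decimated)          # [0 4 8]
print(new_sampling_rate)  # 0.25
```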
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: Example
For the example we switch off the automatic pre-filtering so that the effect of the downsampling routine becomes clearer:
>>> tr = Trace(data=np.arange(10))
>>> tr.stats.sampling_rate
1.0
>>> tr.data
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> tr.decimate(4, strict_length=False, no_filter=True)
>>> tr.stats.sampling_rate
0.25
>>> tr.data
array([0, 4, 8])
"""
# check if endtime changes and this is not explicitly allowed
# do automatic lowpass filtering
# be sure filter still behaves well
"factors above 16. Manual decimation is necessary."
# actual downsampling, as long as sampling_rate is a float we would not
# need to convert to float, but let's do it as a safety measure
# add processing information to the stats dictionary
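The decimation step described above can be sketched with plain NumPy slicing. This is an illustrative stand-in for the method's internals, not ObsPy's actual implementation; ``data``, ``sampling_rate`` and ``factor`` are hypothetical values:

```python
import numpy as np

# hypothetical stand-in values for trace data and header
data = np.arange(10)
sampling_rate = 1.0
factor = 4

# keep only every factor-th sample; without a prior lowpass filter
# this step can introduce aliasing artifacts
decimated = data[::factor]
new_sampling_rate = sampling_rate / float(factor)
print(decimated)  # [0 4 8]
```

The result matches the ``tr.decimate(4, ...)`` doctest: three samples remain and the sampling rate drops to 0.25 Hz.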
""" Returns the value of the absolute maximum amplitude in the trace.
:return: Value of absolute maximum of ``trace.data``.
.. rubric:: Example
>>> tr = Trace(data=np.array([0, -3, 9, 6, 4]))
>>> tr.max()
9
>>> tr = Trace(data=np.array([0, -3, -9, 6, 4]))
>>> tr.max()
-9
>>> tr = Trace(data=np.array([0.3, -3.5, 9.0, 6.4, 4.3]))
>>> tr.max()
9.0
"""
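The signed absolute maximum shown in the doctest can be reproduced with NumPy; ``signed_abs_max`` is an illustrative helper, not part of ObsPy:

```python
import numpy as np

def signed_abs_max(data):
    # return the sample with the largest absolute value,
    # keeping its original sign
    return data[np.abs(data).argmax()]

print(signed_abs_max(np.array([0, -3, -9, 6, 4])))  # -9
```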
""" Method to get the standard deviation of amplitudes in the trace.
:return: Standard deviation of ``trace.data``.
Standard deviation is calculated by numpy method :meth:`~numpy.ndarray.std` on ``trace.data``.
.. rubric:: Example
>>> tr = Trace(data=np.array([0, -3, 9, 6, 4]))
>>> tr.std()
4.2614551505325036
>>> tr = Trace(data=np.array([0.3, -3.5, 9.0, 6.4, 4.3]))
>>> tr.std()
4.4348618918744247
"""
""" Method to differentiate the trace with respect to time.
:type type: ``'gradient'``, optional
:param type: Method to use for differentiation. Defaults to
    ``'gradient'``. See the `Supported Methods`_ section below for
    further details.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Methods`
``'gradient'``
    The gradient is computed using central differences in the interior
    and first differences at the boundaries. The returned gradient
    hence has the same shape as the input array. (uses
    :func:`numpy.gradient`)
"""
# retrieve function call from entry points
# differentiate
# add processing information to the stats dictionary
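What :func:`numpy.gradient` does for the ``'gradient'`` method can be sketched as follows; ``delta`` stands in for the trace's sample spacing and the sampled signal is an assumed example:

```python
import numpy as np

delta = 0.5                  # assumed sample spacing in seconds
t = np.arange(5) * delta
data = t ** 2                # x(t) = t**2, exact derivative is 2*t

# central differences in the interior, one-sided differences at the
# boundaries; the output has the same shape as the input
derivative = np.gradient(data, delta)
```

In the interior the estimate is exact for this quadratic (e.g. 2.0 at t = 1 s); only the two boundary samples deviate.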
""" Method to integrate the trace with respect to time.
:type type: ``'cumtrapz'``, optional
:param type: Method to use for integration. Defaults to ``'cumtrapz'``.
    See the `Supported Methods`_ section below for further details.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Methods`
``'cumtrapz'``
    Trapezoidal rule to cumulatively compute the integral (uses
    :func:`scipy.integrate.cumtrapz`). The result has one sample fewer
    than the input!
``'trapz'``
    Trapezoidal rule to compute the integral from samples (uses
    :func:`scipy.integrate.trapz`).
``'simps'``
    Simpson's rule to compute the integral from samples (uses
    :func:`scipy.integrate.simps`).
``'romb'``
    Romberg integration to compute the integral from (2**k + 1)
    evenly-spaced samples (uses :func:`scipy.integrate.romb`).
"""
# retrieve function call from entry points
# handle function specific settings
# scipy needs to set dx keyword if not given in options
else:
    args = [self.data, self.stats.delta]
# integrating
# add processing information to the stats dictionary
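What ``'cumtrapz'`` computes can be sketched with NumPy alone (ObsPy itself delegates to SciPy); ``cumtrapz_sketch`` and ``delta`` are illustrative names:

```python
import numpy as np

def cumtrapz_sketch(data, dx):
    # cumulative trapezoidal integration; like
    # scipy.integrate.cumtrapz, the result has one sample
    # fewer than the input
    return np.cumsum((data[1:] + data[:-1]) / 2.0 * dx)

delta = 1.0  # assumed sample spacing in seconds
integral = cumtrapz_sketch(np.array([0.0, 1.0, 2.0, 3.0]), delta)
print(integral)  # [0.5 2.  4.5]
```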
""" Method to remove a linear trend from the trace.
:type type: ``'linear'``, ``'constant'``, ``'demean'`` or ``'simple'``,
    optional
:param type: Method to use for detrending. Defaults to ``'simple'``.
    See the `Supported Methods`_ section below for further details.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Methods`
``'simple'``
    Subtracts a linear function defined by the first and last sample of
    the trace (uses :func:`obspy.signal.detrend.simple`).
``'linear'``
    Fits a linear function to the trace with least squares and
    subtracts it (uses :func:`scipy.signal.detrend`).
``'constant'`` or ``'demean'``
    Mean of data is subtracted (uses :func:`scipy.signal.detrend`).
"""
# retrieve function call from entry points
# handle function specific settings
# scipy needs to set the type keyword
# detrending
# add processing information to the stats dictionary
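The line subtraction performed by the ``'simple'`` method can be sketched as follows; ``detrend_simple`` is an illustrative helper, not ObsPy's actual function:

```python
import numpy as np

def detrend_simple(data):
    # subtract the straight line defined by the first and
    # last sample of the array
    ndat = len(data)
    x1, x2 = data[0], data[-1]
    return data - (x1 + np.arange(ndat) * (x2 - x1) / float(ndat - 1))

# purely linear data should be reduced to (numerically) zero
trend = np.arange(5, dtype=float) * 2.0 + 3.0
residual = detrend_simple(trend)
```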
""" Method to taper the trace.
Optional (and sometimes necessary) options to the tapering function can be provided as args and kwargs. See respective function definitions in `Supported Methods`_ section below.
:type type: str
:param type: Type of taper to use. Defaults to ``'cosine'``. See the
    `Supported Methods`_ section below for further details.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: _`Supported Methods`
``'cosine'``
    Cosine taper, for additional options like taper percentage see:
    :func:`obspy.signal.invsim.cosTaper`.
``'barthann'``
    Modified Bartlett-Hann window. (uses: :func:`scipy.signal.barthann`)
``'bartlett'``
    Bartlett window. (uses: :func:`scipy.signal.bartlett`)
``'blackman'``
    Blackman window. (uses: :func:`scipy.signal.blackman`)
``'blackmanharris'``
    Minimum 4-term Blackman-Harris window. (uses:
    :func:`scipy.signal.blackmanharris`)
``'bohman'``
    Bohman window. (uses: :func:`scipy.signal.bohman`)
``'boxcar'``
    Boxcar window. (uses: :func:`scipy.signal.boxcar`)
``'chebwin'``
    Dolph-Chebyshev window. (uses: :func:`scipy.signal.chebwin`)
``'flattop'``
    Flat top window. (uses: :func:`scipy.signal.flattop`)
``'gaussian'``
    Gaussian window with standard deviation std. (uses:
    :func:`scipy.signal.gaussian`)
``'general_gaussian'``
    Generalized Gaussian window. (uses:
    :func:`scipy.signal.general_gaussian`)
``'hamming'``
    Hamming window. (uses: :func:`scipy.signal.hamming`)
``'hann'``
    Hann window. (uses: :func:`scipy.signal.hann`)
``'kaiser'``
    Kaiser window with shape parameter beta. (uses:
    :func:`scipy.signal.kaiser`)
``'nuttall'``
    Minimum 4-term Blackman-Harris window according to Nuttall. (uses:
    :func:`scipy.signal.nuttall`)
``'parzen'``
    Parzen window. (uses: :func:`scipy.signal.parzen`)
``'slepian'``
    Slepian window. (uses: :func:`scipy.signal.slepian`)
``'triang'``
    Triangular window. (uses: :func:`scipy.signal.triang`)
"""
# retrieve function call from entry points
# tapering. tapering functions are expected to accept the number of
# samples as first argument and return an array of values between 0 and
# 1 with the same length as the data
# add processing information to the stats dictionary
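The tapering contract described in the code comments above (a function of the sample count returning weights between 0 and 1, applied sample-wise to the data) can be illustrated with a Hann window from NumPy; the data values are hypothetical:

```python
import numpy as np

data = np.ones(8)

# a taper function takes the number of samples and returns an array
# of weights between 0 and 1 with the same length as the data
window = np.hanning(len(data))
tapered = data * window
```

The window is zero at both ends, so the tapered trace fades in and out smoothly.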
""" Method to normalize the trace to its absolute maximum.
:type norm: ``None`` or float
:param norm: If not ``None``, the trace is normalized by dividing by
    the specified value ``norm`` instead of dividing by its absolute
    maximum. If a negative value is specified then its absolute value
    is used.
If ``trace.data.dtype`` is integer, it is changed to float.
.. note::
This operation is performed in place on the actual data arrays. The raw data is not accessible anymore afterwards. To keep your original data, use :meth:`~obspy.core.trace.Trace.copy` to create a copy of your trace object. This also makes an entry with information on the applied processing in ``stats.processing`` of this trace.
.. rubric:: Example
>>> tr = Trace(data=np.array([0, -3, 9, 6]))
>>> tr.normalize()
>>> tr.data
array([ 0.        , -0.33333333,  1.        ,  0.66666667])
>>> tr.stats.processing
['normalize:9']
>>> tr = Trace(data=np.array([0.3, -3.5, -9.2, 6.4]))
>>> tr.normalize()
>>> tr.data
array([ 0.0326087 , -0.38043478, -1.        ,  0.69565217])
>>> tr.stats.processing
['normalize:-9.2']
"""
# normalize, use norm-kwarg otherwise normalize to 1
msg = "Normalizing with negative values is forbidden. " + \
      "Using absolute value."
warnings.warn(msg)
else:
# add processing information to the stats dictionary
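As the doctest values suggest, the normalization itself reduces to a division by the magnitude of the signed extremum; a NumPy sketch with hypothetical data (not ObsPy's actual code):

```python
import numpy as np

data = np.array([0.3, -3.5, -9.2, 6.4])

# divide by the magnitude of the sample with the largest absolute
# value so the result lies in [-1, 1]; integer input would become
# float at this point
norm = data[np.abs(data).argmax()]
normalized = data / abs(norm)
```

Here ``norm`` is -9.2, which is also the value ObsPy records in ``stats.processing`` as ``'normalize:-9.2'``.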
""" Returns a deepcopy of the trace.
:return: Copy of trace.
This actually copies all data in the trace and does not only provide
another pointer to the same data. If the original data must still be
available after a processing step, use this method beforehand to make a
copy of the trace.
.. rubric:: Example
Make a Trace and copy it:
>>> tr = Trace(data=np.random.rand(10))
>>> tr2 = tr.copy()
The two objects are not the same:
>>> tr2 is tr
False
But they have equal data (before applying further processing):
>>> tr2 == tr
True
The following example shows how to make an alias but not copy the data. Any changes on ``tr3`` would also change the contents of ``tr``.
>>> tr3 = tr
>>> tr3 is tr
True
>>> tr3 == tr
True
"""
""" Adds the given informational string to the `processing` field in the trace's :class:`~obspy.core.trace.stats.Stats` object. """
""" Splits Trace object containing gaps using a NumPy masked array into several traces.
:rtype: list
:returns: List of split traces. A gapless trace will still be returned
    as a list with only one entry.
"""
# no gaps
return [self]
raise NotImplementedError("step not supported")
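Splitting a masked array into contiguous unmasked chunks can be sketched with :func:`numpy.ma.clump_unmasked`; this illustrates the idea, not the method's exact code, and the data values are hypothetical:

```python
import numpy as np

# masked array standing in for trace data with one gap
data = np.ma.masked_equal(np.array([1, 2, -1, -1, 5, 6]), -1)

# one slice per contiguous run of unmasked samples; each slice
# would become its own gapless Trace
chunks = [data[s].data for s in np.ma.clump_unmasked(data)]
```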
""" For convenient plotting compute a Numpy array of seconds since starttime corresponding to the samples in Trace.
:rtype: :class:`~numpy.ndarray` or :class:`~numpy.ma.MaskedArray`
:returns: An array of time samples in an :class:`~numpy.ndarray` if
    the trace doesn't have any gaps or a
    :class:`~numpy.ma.MaskedArray` otherwise.
"""
# Check if the data is a ma.maskedarray
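For a gapless trace, the time axis is just the sample indices divided by the sampling rate; a sketch with assumed header values:

```python
import numpy as np

# assumed header values for an example trace
sampling_rate = 2.0
npts = 5

# seconds since starttime for each sample
time_offsets = np.arange(npts, dtype=np.float64) / sampling_rate
```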
if __name__ == '__main__':
    import doctest
    doctest.testmod(exclude_empty=True)