tonic.functional#
Submodules#
tonic.functional.crop
tonic.functional.decimate
tonic.functional.denoise
tonic.functional.drop_event
tonic.functional.drop_pixel
tonic.functional.event_downsampling
tonic.functional.refractory_period
tonic.functional.spatial_jitter
tonic.functional.time_jitter
tonic.functional.time_skew
tonic.functional.to_averaged_timesurface
tonic.functional.to_bina_rep
tonic.functional.to_frame
tonic.functional.to_timesurface
tonic.functional.to_voxel_grid
tonic.functional.uniform_noise
Package Contents#
Functions#
| Function | Description |
| --- | --- |
| crop_numpy | Crops the sensor size to a smaller sensor. |
| decimate_numpy | Returns 1/n events for each pixel location. |
| denoise_numpy | Drops events that are not sufficiently connected to other events in the recording. |
| drop_by_area_numpy | Drops events located in a randomly chosen box area whose size is a specified ratio of the sensor size. |
| drop_by_time_numpy | Drops events in a time interval whose length is a specified ratio of the recording duration. |
| drop_event_numpy | Randomly drops events with drop_probability. |
| drop_pixel_numpy | Drops all events at the specified pixel locations. |
| integrator_downsample | Spatio-temporally downsamples events using an integrate-and-fire (I-F) neuron model. |
| differentiator_downsample | Spatio-temporally downsamples events using the integrator method coupled with a differentiator. |
| refractory_period_numpy | Sets a refractory period for each pixel, during which events are ignored/discarded. |
| spatial_jitter_numpy | Changes the x/y coordinate of each event by adding samples from a multivariate Gaussian distribution. |
| time_jitter_numpy | Changes the timestamp of each event by adding samples from a Gaussian distribution. |
| time_skew_numpy | Skews all event timestamps according to a linear transform. |
| to_averaged_timesurface_numpy | Creates averaged time surfaces (HATS) for each event of a recording. |
| to_bina_rep_numpy | Takes T*N binary event frames and produces a sequence of T frames of N-bit numbers. |
| to_frame_numpy | Accumulates events into frames by slicing along constant time, constant event count, or a fixed number of bins. |
| to_timesurface_numpy | Creates time surfaces for each event in the recording. |
| to_voxel_grid_numpy | Builds a voxel grid with bilinear interpolation in the time domain from a set of events. |
| uniform_noise_numpy | Adds a fixed number of noise events uniformly distributed across the sensor. |
- tonic.functional.crop_numpy(events, sensor_size, target_size)[source]#
Crops the sensor size to a smaller sensor.
x’ = x - new_sensor_start_x
y’ = y - new_sensor_start_y
- Parameters:
events – ndarray of shape [num_events, num_event_channels]
sensor_size – size of the sensor that was used [W,H]
target_size – size of the target sensor after cropping [W’,H’]
- Returns:
events – events within the crop box; sensor_size – cropped to target_size
- tonic.functional.decimate_numpy(events: numpy.ndarray, n: int)[source]#
Returns 1/n events for each pixel location.
- Parameters:
events (numpy.ndarray) – structured numpy array of events
n (int) – filter rate; only every n-th event per pixel location is kept
- tonic.functional.denoise_numpy(events, filter_time=10000)[source]#
Drops events that are ‘not sufficiently connected to other events in the recording.’ In practice that means that an event is dropped if no other event occurred within a spatial neighbourhood of 1 pixel and a temporal neighbourhood of filter_time time units. Useful to filter noisy recordings where events occur isolated in time.
- Parameters:
events – ndarray of shape [num_events, num_event_channels]
filter_time – maximum temporal distance to the next event, otherwise the event is dropped. Lower values impose stricter constraints and therefore keep fewer events.
- Returns:
filtered set of events.
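A minimal usage sketch, assuming events come as a structured array with x, y, t and p fields and timestamps in microseconds (as Tonic datasets typically provide); the 34x34 recording below is fabricated for illustration.

```python
import numpy as np
import tonic.functional as F

# Fabricated recording: 1000 events with x, y, t, p fields, t in microseconds.
rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"] = rng.integers(0, 34, 1000)
ev["y"] = rng.integers(0, 34, 1000)
ev["t"] = np.sort(rng.integers(0, 1_000_000, 1000))
ev["p"] = rng.integers(0, 2, 1000)

# Keep only events that have a neighbour within 1 pixel and 10 ms.
denoised = F.denoise_numpy(ev, filter_time=10_000)
print(f"{len(ev)} -> {len(denoised)} events after denoising")
```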
- tonic.functional.drop_by_area_numpy(events: numpy.ndarray, sensor_size: Tuple, area_ratio: Union[float, Tuple[float]] = 0.2)[source]#
Drops events located in a randomly chosen box area. The size of the box area is defined by a specified ratio of the sensor size.
- Parameters:
events (np.ndarray) – ndarray of shape [num_events, num_event_channels]
sensor_size (Tuple) – size of the sensor that was used [W,H,P]
area_ratio (Union[float, Tuple[float]], optional) – ratio of the sensor resolution that determines the size of the box area where events are dropped. If a float, the value is used directly to calculate the size of the box area; if a tuple of 2 floats, the ratio is randomly chosen in [min, max). Defaults to 0.2.
- Returns:
augmented events that were not dropped (i.e., the events that are not located in the box area).
- Return type:
np.ndarray
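A sketch of both ways of specifying area_ratio, assuming a fabricated structured x/y/t/p event array and a made-up 34x34 sensor with 2 polarities.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Drop events inside a random box whose size is 20% of the sensor resolution ...
out = F.drop_by_area_numpy(ev, sensor_size=(34, 34, 2), area_ratio=0.2)
# ... or let the ratio itself be drawn uniformly from [0.1, 0.3).
out = F.drop_by_area_numpy(ev, sensor_size=(34, 34, 2), area_ratio=(0.1, 0.3))
```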
- tonic.functional.drop_by_time_numpy(events: numpy.ndarray, duration_ratio: Union[float, Tuple[float]] = 0.2)[source]#
Drops events in a certain time interval with a length proportional to a specified ratio of the original length.
- Parameters:
events (np.ndarray) – ndarray of shape [num_events, num_event_channels]
duration_ratio (Union[float, Tuple[float]], optional) – the length of the dropped time interval, expressed as a ratio of the original sequence duration. If a float, the value is used directly to calculate the interval length; if a tuple of 2 floats, the ratio is randomly chosen in [min, max). Defaults to 0.2.
- Returns:
augmented events that were not dropped (i.e., the events that are not in the time interval).
- Return type:
np.ndarray
- tonic.functional.drop_event_numpy(events: numpy.ndarray, drop_probability: float)[source]#
Randomly drops events with drop_probability.
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels].
drop_probability (float) – probability of dropping each event.
- Returns:
augmented events that were not dropped.
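A short sketch, again assuming a fabricated structured x/y/t/p event array; the drop probability of 0.25 is arbitrary.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Each event is removed independently with probability 0.25.
kept = F.drop_event_numpy(ev, drop_probability=0.25)
print(f"kept {len(kept)} of {len(ev)} events")
```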
- tonic.functional.drop_pixel_numpy(events: numpy.ndarray, coordinates)[source]#
Drops all events at the specified pixel locations.
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels]
coordinates – list of (x,y) coordinates for which all events will be deleted.
- Returns:
subset of original events.
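A sketch of removing hypothetical hot pixels, assuming a fabricated structured x/y/t/p event array.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Remove every event fired by these (hypothetical) hot pixels.
hot_pixels = [(0, 0), (17, 17), (33, 33)]
cleaned = F.drop_pixel_numpy(ev, coordinates=hot_pixels)
```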
- tonic.functional.integrator_downsample(events: numpy.ndarray, sensor_size: tuple, target_size: tuple, dt: float, noise_threshold: int = 0, differentiator_call: bool = False)[source]#
Spatio-temporally downsample using the following steps:
1. Differencing of ON and OFF events to counter camera shake or jerk.
2. Use an integrate-and-fire (I-F) neuron model with a noise threshold similar to the membrane potential threshold in the I-F model to eliminate high-frequency noise.
3. Multiply x/y values by a spatial factor obtained by dividing the sensor size by the target size.
- Parameters:
events (ndarray) – ndarray of shape [num_events, num_event_channels].
sensor_size (tuple) – a 3-tuple of x,y,p for sensor_size.
target_size (tuple) – a 2-tuple of x,y denoting new down-sampled size for events to be re-scaled to (new_width, new_height).
dt (float) – temporal resolution of events in milliseconds.
noise_threshold (int) – number of events before a spike representing a new event is emitted.
differentiator_call (bool) – preserve frame spikes for reuse by the differentiator method, in order to optimise it.
- Returns:
the spatio-temporally downsampled input events using the integrator method.
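A hedged sketch of the call, assuming a fabricated structured x/y/t/p event array with timestamps in microseconds; the 34x34 to 17x17 reduction, the 1 ms dt and the noise threshold of 2 are arbitrary choices.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(5000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 5000), rng.integers(0, 34, 5000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 5000)), rng.integers(0, 2, 5000)

# Downsample 34x34 -> 17x17 with 1 ms temporal resolution; a downsampled spike
# is only emitted once at least `noise_threshold` input events have accumulated.
down = F.integrator_downsample(
    ev,
    sensor_size=(34, 34, 2),
    target_size=(17, 17),
    dt=1.0,
    noise_threshold=2,
)
```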
- tonic.functional.differentiator_downsample(events: numpy.ndarray, sensor_size: tuple, target_size: tuple, dt: float, differentiator_time_bins: int = 2, noise_threshold: int = 0)[source]#
Spatio-temporally downsample using the integrator method coupled with a differentiator, to effectively downsample objects that are large relative to the downsampled pixel resolution in the DVS camera’s visual field.
Based on Ghosh et al. 2023, Insect-inspired Spatio-temporal Downsampling of Event-based Input, https://doi.org/10.1145/3589737.3605994
- Parameters:
events (ndarray) – ndarray of shape [num_events, num_event_channels].
sensor_size (tuple) – a 3-tuple of x,y,p for sensor_size.
target_size (tuple) – a 2-tuple of x,y denoting new down-sampled size for events to be re-scaled to (new_width, new_height).
dt (float) – step size for simulation, in ms.
differentiator_time_bins (int) – number of equally spaced time bins with respect to the dt to be used for the differentiator.
noise_threshold (int) – number of events before a spike representing a new event is emitted.
- Returns:
the spatio-temporally downsampled input events using the differentiator method.
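A companion sketch to the integrator example above, with the same assumptions about the fabricated event array and arbitrary parameter choices.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(5000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 5000), rng.integers(0, 34, 5000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 5000)), rng.integers(0, 2, 5000)

# Same spatial reduction as the integrator example, with 2 differentiator bins per dt step.
down = F.differentiator_downsample(
    ev,
    sensor_size=(34, 34, 2),
    target_size=(17, 17),
    dt=1.0,
    differentiator_time_bins=2,
    noise_threshold=2,
)
```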
- tonic.functional.refractory_period_numpy(events: numpy.ndarray, refractory_period: float)[source]#
Sets a refractory period for each pixel, during which events will be ignored/discarded. We keep events if:
\[t_n - t_{n-1} > t_{refrac}\]
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels]
refractory_period (float) – refractory period for each pixel in microseconds
- Returns:
filtered set of events.
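A sketch with an arbitrary 500 us refractory period, assuming a fabricated structured x/y/t/p event array with timestamps in microseconds.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Discard any event arriving within 500 us of the previous event at the same pixel.
filtered = F.refractory_period_numpy(ev, refractory_period=500)
```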
- tonic.functional.spatial_jitter_numpy(events: numpy.ndarray, sensor_size: List[int], var_x: float = 1, var_y: float = 1, sigma_xy: float = 0, clip_outliers: bool = False)[source]#
Changes x/y coordinate for each event by adding samples from a multivariate Gaussian distribution with the following properties:
\[\text{mean} = [x, y], \qquad \Sigma = \begin{bmatrix} var_x & \sigma_{xy} \\ \sigma_{xy} & var_y \end{bmatrix}\]
Jittered events that lie outside the focal plane will be dropped if clip_outliers is True.
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels]
var_x (float) – squared sigma value for the distribution in the x direction
var_y (float) – squared sigma value for the distribution in the y direction
sigma_xy (float) – covariance between the x and y shifts; only change if you want shifts along a diagonal axis.
clip_outliers (bool) – when True, events that have been jittered outside the sensor size will be dropped.
sensor_size (List[int]) – size of the sensor that was used [W,H,P]
- Returns:
array of spatially jittered events.
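A sketch using arbitrary variances, assuming a fabricated structured x/y/t/p event array and a 34x34x2 sensor.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Jitter x and y independently with variance 2; drop events pushed off the sensor.
jittered = F.spatial_jitter_numpy(
    ev,
    sensor_size=[34, 34, 2],
    var_x=2.0,
    var_y=2.0,
    sigma_xy=0.0,
    clip_outliers=True,
)
```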
- tonic.functional.time_jitter_numpy(events: numpy.ndarray, std: float = 1, clip_negative: bool = False, sort_timestamps: bool = False)[source]#
Changes timestamp for each event by drawing samples from a Gaussian distribution and adding them to each timestamp.
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels]
std (float) – the standard deviation of the time jitter
clip_negative (bool) – drops events that have negative timestamps
sort_timestamps (bool) – sort the events by timestamps after jittering
- Returns:
temporally jittered set of events.
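A sketch with an arbitrary jitter of 100 time units, assuming a fabricated structured x/y/t/p event array with timestamps in microseconds.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Gaussian jitter with a standard deviation of 100 time units (here: microseconds);
# negative timestamps are dropped and the result is re-sorted in time.
jittered = F.time_jitter_numpy(ev, std=100, clip_negative=True, sort_timestamps=True)
```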
- tonic.functional.time_skew_numpy(events: numpy.ndarray, coefficient: float, offset: int = 0)[source]#
Skew all event timestamps according to a linear transform, potentially sampled from a distribution of acceptable functions.
- Parameters:
events (numpy.ndarray) – ndarray of shape [num_events, num_event_channels].
coefficient (float) – a real-valued multiplier applied to the timestamps of the events. E.g. a coefficient of 2.0 will double the effective delay between any pair of events.
offset (int) – value by which the timestamps will be shifted after multiplication by the coefficient. Negative offsets are permissible but may result in an exception if timestamps are shifted below 0.
- Returns:
the input events with rewritten timestamps.
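A sketch of stretching and compressing timestamps, assuming a fabricated structured x/y/t/p event array; the coefficients and offset are arbitrary.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

slowed = F.time_skew_numpy(ev, coefficient=2.0)                # double every inter-event delay
shifted = F.time_skew_numpy(ev, coefficient=0.5, offset=1000)  # halve delays, then shift by 1000
```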
- tonic.functional.to_averaged_timesurface_numpy(events, sensor_size, cell_size, surface_size, time_window, tau, decay)[source]#
Representation that creates averaged timesurfaces for each event for one recording.
Taken from the paper Sironi et al. 2018, HATS: Histograms of averaged time surfaces for robust event-based object classification, https://openaccess.thecvf.com/content_cvpr_2018/papers/Sironi_HATS_Histograms_of_CVPR_2018_paper.pdf
- Parameters:
cell_size (int) – size of each square in the grid
surface_size (int) – has to be odd
time_window (int) – how far back to look for past events for the time averaging. Expressed in microseconds.
tau (int) – time constant to decay events around the occurring event with. Expressed in microseconds.
decay (str) – can be either ‘lin’ or ‘exp’, corresponding to linear or exponential decay.
- Returns:
array of histograms (numpy.ndarray of shape (n_cells, n_pols, surface_size, surface_size))
- tonic.functional.to_bina_rep_numpy(event_frames: numpy.ndarray, n_frames: int = 1, n_bits: int = 8)[source]#
Representation that takes T*N binary event frames to produce a sequence of T frames of N-bit numbers. To do so, N binary frames are interpreted as a single frame of N-bit representation. Taken from the paper Barchid et al. 2022, Bina-Rep Event Frames: a Simple and Effective Representation for Event-based cameras https://arxiv.org/pdf/2202.13662.pdf.
- Parameters:
event_frames (numpy.ndarray) – numpy.ndarray of shape (T*NxPxHxW). The sequence of binary event frames.
n_frames (int) – the number T of bina-rep frames.
n_bits (int) – the number N of bits used in the N-bit representation.
- Returns:
(numpy.ndarray) the sequence of bina-rep event frames with dimensions (TxPxHxW).
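A sketch with fabricated binary frames (in practice they might come from to_frame_numpy thresholded to 0/1); the T=4, N=8 split and the float32 dtype are assumptions.

```python
import numpy as np
import tonic.functional as F

# Fabricated input: 4 bina-rep frames are built from 8 binary frames each,
# so the input holds 4 * 8 = 32 binary frames of shape (P=2, H=34, W=34).
rng = np.random.default_rng(0)
binary_frames = rng.integers(0, 2, size=(32, 2, 34, 34)).astype(np.float32)

bina_rep = F.to_bina_rep_numpy(binary_frames, n_frames=4, n_bits=8)
print(bina_rep.shape)  # (4, 2, 34, 34)
```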
- tonic.functional.to_frame_numpy(events, sensor_size, time_window=None, event_count=None, n_time_bins=None, n_event_bins=None, overlap=0.0, include_incomplete=False, start_time=None, end_time=None)[source]#
Accumulate events to frames by slicing along constant time (time_window), constant number of events (event_count) or constant number of frames (n_time_bins / n_event_bins).
- Parameters:
events – ndarray of shape [num_events, num_event_channels]
sensor_size – size of the sensor that was used [W,H,P]
time_window (None) – window length in us.
event_count (None) – number of events per frame.
n_time_bins (None) – fixed number of frames, sliced along time axis.
n_event_bins (None) – fixed number of frames, sliced along number of events in the recording.
overlap – overlap between frames defined either in time in us, number of events or number of bins.
include_incomplete (False) – if True, includes overhang slice when time_window or event_count is specified. Not valid for bin_count methods.
- Returns:
numpy array with dimensions (TxPxHxW)
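A sketch of the two most common slicing modes, assuming a fabricated structured x/y/t/p event array with timestamps in microseconds; the 10 ms window and the 20 bins are arbitrary.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(5000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 5000), rng.integers(0, 34, 5000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 5000)), rng.integers(0, 2, 5000)

# Constant-time slicing: one frame per 10 ms of recording.
frames = F.to_frame_numpy(ev, sensor_size=(34, 34, 2), time_window=10_000)
print(frames.shape)  # (T, 2, 34, 34), with T set by the recording length

# Fixed number of frames instead, sliced along the time axis.
frames = F.to_frame_numpy(ev, sensor_size=(34, 34, 2), n_time_bins=20)
```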
- tonic.functional.to_timesurface_numpy(events, sensor_size: Tuple[int, int, int], dt: float, tau: float, overlap: int = 0, include_incomplete: bool = False)[source]#
Representation that creates timesurfaces for each event in the recording. Modeled after the paper Lagorce et al. 2016, Hots: a hierarchy of event-based time-surfaces for pattern recognition https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7508476. Unlike the paper, surfaces are always generated across the whole sensor, not just around the event.
- Parameters:
sensor_size (Tuple[int, int, int]) – x/y/p dimensions of the sensor
dt (float) – time interval at which the time-surfaces are accumulated
tau (float) – time constant to decay events around the occurring event with.
overlap (int) – overlap between successive time windows.
include_incomplete (bool) – if True, includes the last, possibly incomplete, time window.
- Returns:
array of timesurfaces with dimensions (n_events//dt, p, h, w)
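A sketch assuming a fabricated structured x/y/t/p event array, with dt and tau given in the same unit as the timestamps (microseconds here); the values are arbitrary.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(2000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 2000), rng.integers(0, 34, 2000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 2000)), rng.integers(0, 2, 2000)

# Accumulate a surface every 10,000 time units, decaying events with tau = 5,000.
surfaces = F.to_timesurface_numpy(ev, sensor_size=(34, 34, 2), dt=10_000, tau=5_000)
print(surfaces.shape)  # (number of slices, 2, 34, 34)
```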
- tonic.functional.to_voxel_grid_numpy(events, sensor_size, n_time_bins=10)[source]#
Build a voxel grid with bilinear interpolation in the time domain from a set of events. Implements the event volume from Zhu et al. 2019, Unsupervised event-based learning of optical flow, depth, and egomotion.
- Parameters:
events – ndarray of shape [num_events, num_event_channels]
sensor_size – size of the sensor that was used [W,H].
n_time_bins – number of bins in the temporal axis of the voxel grid.
- Returns:
numpy array of n event volumes (n,w,h,t)
- tonic.functional.uniform_noise_numpy(events: numpy.ndarray, sensor_size: Tuple[int, int, int], n: int)[source]#
Adds a fixed number of noise events that are uniformly distributed across sensor size dimensions.
- Parameters:
events (numpy.ndarray) – ndarray of shape (n_events, n_event_channels)
sensor_size (Tuple[int, int, int]) – 3-tuple of integers for x, y, p
n (int) – the number of noise events added.
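A closing sketch, assuming a fabricated structured x/y/t/p event array and a 34x34x2 sensor; n=100 is arbitrary.

```python
import numpy as np
import tonic.functional as F

rng = np.random.default_rng(0)
ev = np.empty(1000, dtype=[("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")])
ev["x"], ev["y"] = rng.integers(0, 34, 1000), rng.integers(0, 34, 1000)
ev["t"], ev["p"] = np.sort(rng.integers(0, 1_000_000, 1000)), rng.integers(0, 2, 1000)

# Inject 100 uniformly distributed noise events into the recording.
noisy = F.uniform_noise_numpy(ev, sensor_size=(34, 34, 2), n=100)
print(len(noisy))  # original events plus 100 noise events
```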