Dataset classes#

Dataset base class#

class tonic.Dataset(save_to: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]#

Base class for Tonic datasets which download public data.

Contains a few helper functions to reduce duplicated code.

_check_exists()[source]#

Performs a lightweight check to see whether the downloaded files are present and, if need be, extracted.

Hashing all downloaded files takes too long for large datasets.

_folder_contains_at_least_n_files_of_type(n_files: int, file_type: str) → bool[source]#

Check whether the target folder folder_name contains at least n_files files of type file_type, hinting that the original archive has probably been extracted.

_is_file_present() → bool[source]#

Check whether the dataset file (e.g. .zip, .rosbag, .hdf5, …) is present on disk.

No hashing.

download() → None[source]#

Downloads the dataset from a given URL, places it in the target folder, and verifies the file hash.
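
For illustration, here is a minimal sketch of a custom subclass built on these helpers. The class name, URL, hash, folder layout and the location_on_system attribute are assumptions made for this sketch, not part of the documented API:

```python
import os
from typing import Callable, Optional

import numpy as np

import tonic


class MyRecordings(tonic.Dataset):
    """Hypothetical dataset, for illustration only."""

    url = "https://example.com/recordings.zip"  # hypothetical URL
    filename = "recordings.zip"  # hypothetical archive name
    file_md5 = "0123456789abcdef0123456789abcdef"  # hypothetical hash
    folder_name = "recordings"  # hypothetical extraction folder

    def __init__(self, save_to: str, transform: Optional[Callable] = None):
        super().__init__(save_to, transform=transform)
        # Lightweight existence check first; download only if needed.
        if not self._check_exists():
            self.download()

    def _check_exists(self) -> bool:
        # Archive present and extracted? No per-file hashing here.
        return self._is_file_present() and self._folder_contains_at_least_n_files_of_type(
            100, ".npy"
        )

    def __getitem__(self, index):
        # location_on_system is assumed to point at the download folder.
        events = np.load(
            os.path.join(self.location_on_system, self.folder_name, f"{index}.npy")
        )
        if self.transform is not None:
            events = self.transform(events)
        return events, 0  # dummy target, for the sketch only

    def __len__(self):
        return 100
```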

Caching#

class tonic.MemoryCachedDataset(dataset: Iterable, device: Optional[str] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]#

MemoryCachedDataset caches samples in memory to substantially improve data loading speeds. However, keep a close eye on memory consumption while loading your samples, which can increase rapidly when converting events to rasters/frames. If your transformed dataset doesn't fit into memory but you still want to cache samples to speed up training, consider using DiskCachedDataset instead.

Parameters
  • dataset – Dataset to be cached to memory.

  • device – Device to cache to. This is preferably a torch device. Will cache to CPU memory if None (default).

  • transform – Transforms to be applied to the data

  • target_transform – Transforms to be applied to the labels/targets

  • transforms – A callable of transforms that is applied to both data and labels at the same time.
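
A minimal usage sketch, assuming the NMNIST dataset and ToFrame transform that ship with Tonic; the paths and bin count are placeholder choices:

```python
import tonic
import tonic.transforms as transforms

# Any Tonic dataset works; NMNIST is just an example.
dataset = tonic.datasets.NMNIST(save_to="./data", train=True)

# The transform passed to MemoryCachedDataset is applied after a sample
# is read back from the cache, so the cache holds compact raw events
# rather than dense frames.
frame_transform = transforms.ToFrame(
    sensor_size=tonic.datasets.NMNIST.sensor_size, n_time_bins=30
)

cached = tonic.MemoryCachedDataset(dataset, transform=frame_transform)
frames, target = cached[0]  # first access fills the cache, later ones read from memory
```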

class tonic.DiskCachedDataset(dataset: Iterable, cache_path: str, reset_cache: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None, num_copies: int = 1, compress: bool = True)[source]#

DiskCachedDataset caches data samples on the hard drive for subsequent reads, thereby potentially improving data loading speeds. If dataset is None, the length of this dataset will be inferred from the number of files in the caching folder. Pay attention to the cache path you provide: DiskCachedDataset simply checks whether a file with the index it is looking for is present. When using train/test splits, it is wise to reflect the split in the cache path as well.

Note

When you change the transform that is applied before caching, DiskCachedDataset cannot know about this and will present you with an old file. To avoid this, either clear your cache folder manually when needed, incorporate all transformation parameters into the cache path (which creates a tree of cache files), or use reset_cache=True.

Note

Caching PyTorch tensors will write NumPy arrays to disk, so be careful if you expect a tensor when loading a sample. The recommendation is to defer the conversion to tensor as late as possible.

Parameters
  • dataset – Dataset to be cached to disk. Can be None if only the files in cache_path should be used.

  • cache_path – The preferred path where the cache will be written to and read from.

  • reset_cache – When True, clears out the cache path during initialisation. Default is False.

  • transform – Transforms to be applied to the data

  • target_transform – Transforms to be applied to the labels/targets

  • transforms – A callable of transforms that is applied to both data and labels at the same time.

  • num_copies – Number of copies of each sample to be cached. This is a useful parameter if the dataset is being augmented with slow, random transforms.

  • compress – Whether to apply lightweight lzf compression. Default is True.
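
A minimal sketch that reflects the notes above, assuming NMNIST and ToFrame from Tonic and placeholder paths: pre-cache processing lives on the underlying dataset, the split and transform parameters are encoded in the cache path, and the tensor conversion is deferred until after the cache read:

```python
import torch

import tonic
import tonic.transforms as transforms

# Pre-cache transform: its output is what gets written to disk.
frame_transform = transforms.ToFrame(
    sensor_size=tonic.datasets.NMNIST.sensor_size, n_time_bins=30
)
dataset = tonic.datasets.NMNIST(save_to="./data", train=True, transform=frame_transform)

# Encode split and transform parameters in the cache path so that
# train/test caches and different pre-processing never collide.
cached = tonic.DiskCachedDataset(
    dataset,
    cache_path="./cache/nmnist/train/30_time_bins",
    transform=torch.from_numpy,  # cached samples come back as numpy arrays
    reset_cache=False,  # set to True once after changing frame_transform
)
frames, target = cached[0]
```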

class tonic.CachedDataset(*args, **kwargs)[source]#

Deprecated class that points to DiskCachedDataset for now but will be removed in a future release.

Please use MemoryCachedDataset or DiskCachedDataset instead.

Slicing#

class tonic.SlicedDataset(dataset: Iterable, slicer: tonic.slicers.Slicer, metadata_path: Optional[str] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]#

The primary use case for a SlicedDataset is to cut existing examples in a dataset into smaller chunks. To that end, it takes an iterable dataset and a slicing method as input, then generates metadata about the slices and where to find them in each original sample. The new dataset length will be the sum of all slices across samples.

Parameters
  • dataset – a dataset object which implements __getitem__ and __len__ methods.

  • slicer – a function which implements the tonic.slicers.Slicer protocol, meaning that it does not have to inherit from it but must implement all of its methods.

  • metadata_path – filepath where slice metadata should be stored so that it does not have to be recomputed the next time. If None, metadata will be recomputed every time.

  • transform – Transforms to be applied to the data

  • target_transform – Transforms to be applied to the labels/targets

  • transforms – A callable of transforms that is applied to both data and labels at the same time.

generate_metadata()[source]#

Slices every sample in the wrapped dataset and returns start and stop metadata for each slice.
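
A minimal sketch of time-based slicing, assuming the SliceByTime slicer from tonic.slicers; the dataset, window length and metadata path are placeholder choices:

```python
import tonic
from tonic.slicers import SliceByTime

dataset = tonic.datasets.NMNIST(save_to="./data", train=False)

# Event timestamps are in microseconds; cut each recording into 100 ms chunks.
slicer = SliceByTime(time_window=100_000)

sliced = tonic.SlicedDataset(
    dataset, slicer=slicer, metadata_path="./metadata/nmnist_100ms"
)

print(len(sliced))  # sum of all slices across the original samples
events, target = sliced[0]  # first slice of the first recording
```

On the first run, slice metadata is generated and written to metadata_path; later runs read it back instead of re-slicing.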