tonic.prototype.datasets.prophesee#

Module Contents#

Classes#

AutomotiveDetectionFileReader

Iterable-style DataPipe.

AutomotiveDetectionBaseClass

Base class for Automotive Detection datasets.

Gen1AutomotiveDetection

Gen1 Automotive Detection Dataset.

Gen4AutomotiveDetectionMini

Gen4 Automotive Detection.

Gen4Automotive

Gen4 Automotive Detection.

Gen4Downsampled

Iterable-style DataPipe.

class tonic.prototype.datasets.prophesee.AutomotiveDetectionFileReader(dp: torchdata.datapipes.iter.IterDataPipe[str])[source]#

Bases: torchdata.datapipes.iter.IterDataPipe[tonic.prototype.datasets.utils._dataset.Sample]

Iterable-style DataPipe.

All DataPipes that represent an iterable of data samples should subclass this. This style of DataPipes is particularly useful when data come from a stream, or when the number of samples is too large to fit them all in memory. IterDataPipe is lazily initialized and its elements are computed only when next() is called on the iterator of an IterDataPipe.

All subclasses should overwrite __iter__(), which would return an iterator of samples in this DataPipe. Calling __iter__ of an IterDataPipe automatically invokes its method reset(), which by default performs no operation. When writing a custom IterDataPipe, users should override reset() if necessary. The common usages include resetting buffers, pointers, and various state variables within the custom IterDataPipe.

Note

Only one iterator can be valid for each IterDataPipe at a time, and the creation of a second iterator will invalidate the first one. This constraint is necessary because some IterDataPipes have internal buffers whose state can become invalid if there are multiple iterators. The code example below shows how this constraint looks in practice. If you have any feedback related to this constraint, please see the GitHub IterDataPipe Single Iterator Issue.

These DataPipes can be invoked in two ways: using the class constructor, or applying their functional form onto an existing IterDataPipe (recommended; available for most but not all DataPipes). You can chain multiple IterDataPipes together to form a pipeline that performs multiple operations in succession.

Note

When a subclass is used with DataLoader, each item in the DataPipe will be yielded from the DataLoader iterator. When num_workers > 0, each worker process will have a different copy of the DataPipe object, so it is often desirable to configure each copy independently to avoid duplicate data being returned from the workers. get_worker_info(), when called in a worker process, returns information about the worker. It can be used in either the dataset's __iter__() method or the DataLoader's worker_init_fn option to modify each copy's behavior.
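One common way to avoid duplicate data across workers with torchdata is to insert a sharding_filter step, which lets each worker keep a disjoint subset of elements. A minimal sketch, not specific to the Prophesee datasets:

>>> from torch.utils.data import DataLoader
>>> from torchdata.datapipes.iter import IterableWrapper
>>> # sharding_filter makes each worker yield a disjoint shard instead of
>>> # a duplicate copy of the full stream
>>> dp = IterableWrapper(range(10)).sharding_filter()
>>> loader = DataLoader(dp, num_workers=2)
>>> items = list(loader)  # no duplicates across the two workers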

Examples

General Usage:
>>> # xdoctest: +SKIP
>>> from torchdata.datapipes.iter import IterableWrapper, Mapper
>>> dp = IterableWrapper(range(10))
>>> map_dp_1 = Mapper(dp, lambda x: x + 1)  # Using class constructor
>>> map_dp_2 = dp.map(lambda x: x + 1)  # Using functional form (recommended)
>>> list(map_dp_1)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> list(map_dp_2)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> filter_dp = map_dp_1.filter(lambda x: x % 2 == 0)
>>> list(filter_dp)
[2, 4, 6, 8, 10]
Single Iterator Constraint Example:
>>> from torchdata.datapipes.iter import IterableWrapper, Mapper
>>> source_dp = IterableWrapper(range(10))
>>> it1 = iter(source_dp)
>>> list(it1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> it1 = iter(source_dp)
>>> it2 = iter(source_dp)  # The creation of a new iterator invalidates `it1`
>>> next(it2)
0
>>> next(it1)  # Further usage of `it1` will raise a `RuntimeError`
Parameters:

dp (torchdata.datapipes.iter.IterDataPipe[str]) – DataPipe that yields paths to the recording files.

__iter__() Iterator[tonic.prototype.datasets.utils._dataset.Sample][source]#
Return type:

Iterator[tonic.prototype.datasets.utils._dataset.Sample]
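A minimal usage sketch; the file path below is a hypothetical stand-in for an unpacked recording:

>>> from torchdata.datapipes.iter import IterableWrapper
>>> from tonic.prototype.datasets.prophesee import AutomotiveDetectionFileReader
>>> paths = IterableWrapper(["/data/gen1/train/rec_0.dat"])  # hypothetical path
>>> reader = AutomotiveDetectionFileReader(paths)
>>> sample = next(iter(reader))  # yields one Sample per file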

class tonic.prototype.datasets.prophesee.AutomotiveDetectionBaseClass(root: os.PathLike, split: str = 'train', skip_sha256_check: Optional[bool] = True, shuffle=False)[source]#

Bases: tonic.prototype.datasets.utils._dataset.Dataset

Base class for Automotive Detection datasets.

Parameters:
  • root (os.PathLike) – Location of the decompressed archive.

  • split (str) – Can be ‘train’ (default), ‘valid’ or ‘test’.

  • skip_sha256_check (Optional[bool]) – If True (default), skip the SHA256 integrity check of the downloaded files.

class tonic.prototype.datasets.prophesee.Gen1AutomotiveDetection(root: os.PathLike, split: str = 'train', shuffle: bool = False)[source]#

Bases: AutomotiveDetectionBaseClass

Gen1 Automotive Detection Dataset

To download the data, you’ll need to agree to Prophesee’s Terms and Conditions.

Then, the steps to acquire the data can be as follows:

Download the torrent file for the dataset:

wget "https://dataset.prophesee.ai/index.php/s/uE0QGLaFAEQnPwy/download?path=%2F&files=ATIS%20Automotive%20Detection%20Dataset.torrent" \
  -O Gen1Prophesee.torrent

Download the data using peer-to-peer connections. On Linux this can be done using aria2c on the command line:

aria2c Gen1Prophesee.torrent

This will download several 7z archives for training and testing. We’ll need to unpack them manually by looping over the 7z files and feeding them to 7z:

sudo apt-get install p7zip-full
for i in *.7z; do 7z x "$i"; done
Parameters:
  • root (string) – Location of the decompressed archive.

  • split (str) – Can be ‘train’ (default), ‘valid’ or ‘test’.

  • shuffle (bool) – If True, the dataset will be shuffled randomly.

sensor_size#
class_map#
__len__() int[source]#

Returns the number of samples in the dataset for the selected split.

Return type:

int
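A minimal usage sketch, assuming the archives have been unpacked to a hypothetical ./Gen1Prophesee directory; the exact layout of the yielded samples is an assumption:

>>> from tonic.prototype.datasets.prophesee import Gen1AutomotiveDetection
>>> dataset = Gen1AutomotiveDetection(root="./Gen1Prophesee", split="train")
>>> n = len(dataset)  # number of samples in the training split
>>> sample = next(iter(dataset))  # sample layout (events plus targets) assumed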

class tonic.prototype.datasets.prophesee.Gen4AutomotiveDetectionMini(root: os.PathLike, split: str = 'train', shuffle: bool = False, skip_sha256_check: Optional[bool] = True)[source]#

Bases: AutomotiveDetectionBaseClass

Gen4 Automotive Detection

This dataset requires the ‘expelliarmus’ package to be installed on the system. Events have “txyp” ordering. Citation:

@article{de2020large,
  title={A large scale event-based detection dataset for automotive},
  author={De Tournemire, Pierre and Nitti, Davide and Perot, Etienne and Migliore, Davide and Sironi, Amos},
  journal={arXiv preprint arXiv:2001.08499},
  year={2020}
}

Note

The hosting server is very flaky and often interrupts the download before it is completed. If you end up with anything smaller than 23GB on disk, delete and try again.

Parameters:
  • root (string) – Location of the decompressed archive.

  • split (str) – Can be ‘train’ (default), ‘valid’ or ‘test’.

  • shuffle (bool) – If True, the dataset will be shuffled randomly.

  • skip_sha256_check (Optional[bool]) – If True (default), skip the SHA256 integrity check of the downloaded files.

sensor_size#
class_map#
__len__() int[source]#

Returns the number of samples in the dataset for the selected split.

Return type:

int
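A minimal usage sketch, assuming root points at (or will receive) the downloaded data; the sample layout is an assumption:

>>> from tonic.prototype.datasets.prophesee import Gen4AutomotiveDetectionMini
>>> dataset = Gen4AutomotiveDetectionMini(root="./data", split="train")
>>> dataset.sensor_size, len(dataset)  # sensor geometry and sample count
>>> sample = next(iter(dataset))  # events come in "txyp" ordering; layout assumed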

class tonic.prototype.datasets.prophesee.Gen4Automotive(root: os.PathLike, split: str = 'train', shuffle: bool = False)[source]#

Bases: AutomotiveDetectionBaseClass

Gen4 Automotive Detection

This dataset requires the ‘expelliarmus’ package to be installed on the system. Events have “txyp” ordering. Citation:

@article{de2020large,
  title={A large scale event-based detection dataset for automotive},
  author={De Tournemire, Pierre and Nitti, Davide and Perot, Etienne and Migliore, Davide and Sironi, Amos},
  journal={arXiv preprint arXiv:2001.08499},
  year={2020}
}

To download the data, you’ll need to agree to Prophesee’s Terms and Conditions.

Then, the steps to acquire the data can be as follows:

Download the torrent file for the dataset:

wget "https://dataset.prophesee.ai/index.php/s/8HY0Bv4mOU4RzBm/download?path=%2F&files=Large_Automotive_Detection_Dataset.torrent" \
  -O Gen4Prophesee.torrent

Download the data using peer-to-peer connections. On Linux this can be done using aria2c on the command line:

aria2c Gen4Prophesee.torrent

This will download several 7z archives for training, validation and testing. We’ll need to unpack them manually by looping over the 7z files and feeding them to 7z:

sudo apt-get install p7zip-full
for i in *.7z; do 7z x "$i"; done
Parameters:
  • root (string) – Location of the decompressed archive.

  • split (str) – Can be ‘train’ (default), ‘valid’ or ‘test’.

  • shuffle (bool) – If True, the dataset will be shuffled randomly.

sensor_size#
class_map#
__len__() int[source]#

Returns the number of samples in the dataset for the selected split.

Return type:

int
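A minimal usage sketch, assuming the archives have been unpacked to a hypothetical ./Gen4Prophesee directory:

>>> from tonic.prototype.datasets.prophesee import Gen4Automotive
>>> dataset = Gen4Automotive(root="./Gen4Prophesee", split="train")
>>> n = len(dataset)  # number of samples in the training split
>>> sample = next(iter(dataset))  # events come in "txyp" ordering; layout assumed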

class tonic.prototype.datasets.prophesee.Gen4Downsampled(root: os.PathLike, split: str = 'train', skip_sha256_check: Optional[bool] = True, shuffle=False)[source]#

Bases: tonic.prototype.datasets.utils._dataset.Dataset

Iterable-style DataPipe. The IterDataPipe interface notes, single-iterator constraint, and usage examples are the same as for AutomotiveDetectionFileReader above.
Parameters:
  • root (os.PathLike) – Location of the decompressed archive.

  • split (str) – Can be ‘train’ (default), ‘valid’ or ‘test’.

  • skip_sha256_check (Optional[bool]) – If True (default), skip the SHA256 integrity check of the downloaded files.

__len__() int[source]#

Returns the number of samples in the dataset for the selected split.

Return type:

int
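A minimal usage sketch, assuming the data resides under a hypothetical ./Gen4Downsampled directory:

>>> from tonic.prototype.datasets.prophesee import Gen4Downsampled
>>> dataset = Gen4Downsampled(root="./Gen4Downsampled", split="train")
>>> n = len(dataset)  # number of samples in the training split
>>> sample = next(iter(dataset))  # sample layout assumed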