Apple researchers have released a paper about a "trainable deep architecture", setting out the fruity firm's plans to make autonomous vehicles better at detecting cyclists and pedestrians.
The paper, jointly authored by Apple researchers Yin Zhou and Oncel Tuzel, details a system the pair call Voxelnet. A voxel is the 3D equivalent of a pixel: a small volume element on a 3D grid.
The Voxelnet proposal would, say the Apple twosome, divide "a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer", as the paper puts it.
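For a rough sense of what that first step means, here is a minimal sketch of partitioning a point cloud into equally spaced voxels. The voxel size, the dictionary-based grouping, and the function name are illustrative assumptions, not Apple's actual implementation; the paper's VFE layer would then turn each voxel's points into a single feature vector.

```python
# Illustrative sketch only: group a LIDAR-style point cloud into equally
# spaced 3D voxels. Voxel size and grouping scheme are assumptions here,
# not details from Apple's paper.
import numpy as np
from collections import defaultdict

def voxelize(points: np.ndarray, voxel_size: float = 0.5) -> dict:
    """Group an (N, 3) array of points by the voxel each one falls into."""
    # Integer grid coordinates of the voxel containing each point.
    indices = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(list)
    for point, idx in zip(points, indices):
        voxels[tuple(idx)].append(point)
    # In Voxelnet, each voxel's group of points would next be fed through
    # the voxel feature encoding (VFE) layer to get one feature per voxel.
    return {key: np.stack(group) for key, group in voxels.items()}

pts = np.array([[0.1, 0.2, 0.3],
                [0.2, 0.1, 0.4],   # lands in the same voxel as the first
                [1.1, 0.0, 0.0]])  # lands in a different voxel
grid = voxelize(pts, voxel_size=0.5)
```

With these toy points, the first two share voxel (0, 0, 0) and the third sits alone in voxel (2, 0, 0).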
The paper goes on to claim, unsurprisingly, that Apple's practical tests of its own new system have outperformed existing "LIDAR-based 3D detection methods by a large margin".
LIDAR-based sensor suites are a standard fit nowadays for self-driving cars and existing road vehicles modified to serve as driverless car testbeds. Apple's proposal effectively puts its own software directly on the output of the LIDAR sensor, a step the researchers claim greatly increases detection performance.
Apple is notorious for keeping its technology advances largely under wraps. In terms of autonomous vehicles, the fruity firm did scoop a permit for testing self-driving vehicles in California, USA, back in April. However, a year ago reports were gathering thick and fast that its "Project Titan" car project was grinding to a halt.
The Voxelnet paper can be read on the academic preprint repository Arxiv. ®