We have now officially launched the Reeds dataset: the world’s largest dataset for benchmarking perception algorithms for robots and vehicles. The idea for this dataset came a few years back, when we noticed that newly published perception algorithms often showed results very similar to previously published ones, for example on KITTI. Then, when the Waymo dataset was released back in 2019, we became even more committed to doing something, as the new dataset to a large extent contained data similar to the much older KITTI set. We wanted to bring something more interesting to the research community: not to do the same things over and over, but rather to look for more difficult and varied problems, and for more honest and interesting benchmarks.
From our activities in the Chalmers Revere lab since 2015 we had gathered considerable experience and software, such as our own libcluon and OpenDLV, and the work of creating autonomous machines seemed to involve many of the same problems as creating datasets. Road vehicles were also getting a bit old, so why not start collecting data from our floating friends? We had recently started a fruitful collaboration with colleagues at Research Institutes of Sweden (RISE), who happened to have a boat that was nearly perfect for the task. So, together with RISE, the University of Gothenburg, and the Swedish Maritime Administration, we applied for funding from the Swedish Transport Administration, and we were fortunate to receive the funds to continue.
Now, two years later, we are finally reaching the point where we can start sharing our work and data. If you are interested in joining this journey, and in seeing how the world’s largest dataset for benchmarking robot perception develops, then please sign up and join us for an exciting future!