Our LiDAR sensor streams fascinating 3D point clouds that enable a host of innovative applications. However, for some applications, 3D point cloud processing can be too expensive.
What if we could represent our data as 2D image planes and apply less expensive image processing algorithms to them? How do convolutional neural networks perform on this type of data?
Can we embed meaningful processing features directly in our device? Join our team of engineers and prove that you are able to push the boundaries of what is possible!
Implement an FPGA module to stream depth images
Record depth images for algorithm development
Evaluate different algorithms for denoising, undistortion, and point cloud generation
Evaluate approaches for machine-learning-based feature detection, e.g. CNNs
Support marketing to prepare impressive demos
Support patent applications for your ideas
You are currently enrolled in a master's program in computer science, electrical engineering, or a similar field
You have experience with embedded hardware, especially FPGAs
You have relevant programming skills in VHDL, Python, and C/C++
You have experience with image processing
Experience with machine learning frameworks is a plus
Ideally, you have initial experience with (embedded) Linux
You have very good communication skills in English or German