Manuel Ligges earned his diploma and his Ph.D. in physics in 2009 from the University of Duisburg-Essen, where he continued to work as a senior researcher in light-matter interaction. In 2019, he joined the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg as Head of Optical Systems.
DVN: Manuel, thank you for talking with us. Fraunhofer is a well-known applied-research organization with 77 institutes worldwide, each covering specific technology areas. Can you tell us about the focus of Fraunhofer IMS?
Manuel Ligges: The Fraunhofer Institute for Microelectronic Circuits and Systems—IMS—is traditionally a ‘silicon institute’ with a strong background in CMOS technology. It is also part of the Forschungsfabrik Mikroelektronik Deutschland (FMD), Europe’s largest R&D cooperative for nano- and microelectronics.
We operate in-house cleanroom facilities for CMOS and MEMS fabrication. A main focus of our research is smart and embedded sensor systems. This includes the design and fabrication of Si-based sensor core technologies, their refinement by means of microsystems technology, smart sensor control and data processing (both classical and embedded-AI based), and system-level integration.
Exemplary developments include uncooled IR sensors, wireless transponder technologies, our in-house RISC-V implementation AIRISC, and our platform-independent embedded-AI suite AIfES. The optical lidar sensors based on our in-house SPAD technology are part of this research and development focus. We are mainly, but not exclusively, targeting four business branches: health, industry, mobility, and space & security.
DVN: During your presentation at our last conference, you mentioned the NeurOSmart ‘lighthouse’ project, looking at data reduction methods important for future lidar systems. Tell us more, will you?
ML: In general, Fraunhofer Lighthouse projects aim to exploit the synergies of different Fraunhofer institutes to provide solutions for current challenges facing German industry. In this particular Lighthouse project, together with Fraunhofer ISIT, IPMS, IWU, and IAIS, we are developing hybrid computing architectures for autonomous machines and transportation systems. The higher-order problem is that advanced levels of autonomy will require more sensors, more complex data fusion, and correspondingly more computing power and energy consumption.
We are working on both dedicated hardware and dedicated software solutions to address these issues. The data reduction method I presented is part of a lidar use case in which we aim to implement an intelligent feedback loop, based on latency-reduced AI data analysis and a controllable sensor architecture, that will allow us to reduce the overall data stream and the power consumption of the total system.
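To make the feedback-loop idea concrete, here is a minimal Python sketch; all class and function names are invented for illustration and do not reflect Fraunhofer IMS code. A cheap coarse capture feeds a low-latency analysis stage, and only the regions it flags are read out densely on the next frame, which is what shrinks the data stream.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    x: int
    y: int
    w: int
    h: int

class StubSensor:
    """Stand-in for a controllable SPAD sensor: coarse full-frame readout,
    or dense readout restricted to a region of interest (ROI)."""
    def capture(self, roi=None):
        # A full frame at full resolution would be 256 x 256 = 65,536 pixels;
        # instead we read a cheap 64 x 64 overview, or a dense ROI only.
        if roi is None:
            return {"mode": "coarse", "pixels": 64 * 64}
        return {"mode": "fine", "pixels": roi.w * roi.h * 16}

def analyze(frame):
    """Stand-in for the low-latency AI analysis; in this toy setup it always
    'finds' one object in the upper-left quadrant of a coarse frame."""
    return [ROI(0, 0, 32, 32)] if frame["mode"] == "coarse" else []

def feedback_loop(sensor, n_frames=4):
    roi = None  # start with a coarse full-frame capture
    for _ in range(n_frames):
        frame = sensor.capture(roi=roi)
        hits = analyze(frame)
        print(frame["mode"], frame["pixels"])
        roi = hits[0] if hits else None  # reconfigure the sensor for the next frame

feedback_loop(StubSensor())
```

Even in this toy version, every frame transfers far fewer pixels than a full-resolution readout would.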
We want to demonstrate this architecture in a human-robot interaction scenario, where violations of spatial ‘safety zones’ by humans are to be detected with very low latency and high accuracy. For this purpose, we are in constant exchange with one another, harmonizing specifications and requirements as well as the current state of each partner’s core technology development and the overall project progress. The project funding is granted for a period of three years, for which we have defined several milestones. Besides the obvious frequent technical and organizational project meetings, there are also several higher-level status reviews to track the overall progress of the project.
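Returning to the safety-zone scenario: a toy version of the detection step might look like the following, assuming the point cloud is already available as an array. The zone geometry and the noise-rejection threshold are invented for illustration.

```python
import numpy as np

def zone_violated(points, zone_min, zone_max, min_hits=5):
    """points: (N, 3) array of x/y/z lidar returns in metres.
    Require a handful of points inside the zone to reject single-photon noise."""
    inside = np.all((points >= zone_min) & (points <= zone_max), axis=1)
    return int(inside.sum()) >= min_hits

# Example: a 1 m x 1 m x 2 m protected volume next to a robot arm.
rng = np.random.default_rng(0)
cloud = rng.uniform(-5.0, 5.0, size=(10_000, 3))
print(zone_violated(cloud, np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 2.0])))
```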
DVN: What specific research areas does IMS focus on in this collaboration?
ML: The Fraunhofer IMS has two major tasks within this project. First, we are developing the overall optical lidar concept and system in close collaboration with the other Fraunhofer institutes; in the course of this, we are also responsible for setting up the complete system. We will use a hybrid flash/scanning approach to first roughly capture the whole target scene. Subsequently we identify regions of interest (ROIs), reconfigure the sensor and laser scanner, and finally map the ROIs with high precision.
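The hybrid flash/scanning sequence can be summarized in a short, hypothetical Python sketch; the functions are placeholders for the real sensor and scanner control, not project code.

```python
def coarse_flash_capture():
    """Flash-illuminate the full field of view once; return a low-res depth map."""
    return [[2.0 + 0.1 * (x + y) for x in range(8)] for y in range(8)]

def find_rois(depth_map, near=2.5):
    """Flag coarse cells with close-by returns as regions of interest."""
    return [(x, y) for y, row in enumerate(depth_map)
            for x, d in enumerate(row) if d < near]

def fine_scan(rois):
    """Steer the laser scanner across each ROI and record dense depth samples."""
    return {roi: f"dense scan of cell {roi}" for roi in rois}

depth = coarse_flash_capture()
rois = find_rois(depth)           # reconfigure sensor and scanner for these cells
print(len(rois), "ROIs:", fine_scan(rois))
```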
Besides the development of the concept itself, the whole sensor architecture as well as the laser scanners have to be developed, adapted, and precisely synchronized. As a second task, we are using our hardware/software co-development to quickly reduce the overall amount of data, as presented in my talk, and to prepare the compressed data for the subsequent object identification required for the feedback loop mentioned above.
These processes are performed on an in-house-developed RISC-V based multicore system, which will provide the infrastructure to operate digital neuromorphic AI accelerators. On the lidar side, we will profit from the novel flash/scanning hybrid approach, which can be used in a variety of application scenarios. On the hardware/software side, our developments are not tied to lidar applications but are adaptable to different use cases involving other sensor concepts or architectures as well.
DVN: What are the main technology challenges you face?
ML: The optical concept for the NeurOSmart use case is set, and we do not expect significant surprises. The challenges are mainly on the signal-processing side, where we are aiming at low-latency, AI-based data processing. For this purpose, we need to optimize the network structure and signal chains in great detail while tailoring the underlying hardware platform to the specific task. This hardware/software co-design is challenging, but it profits greatly from the fact that we have competences in both fields at our institute and are embedded in an excellent consortium.
DVN: Fraunhofer uses the ATLAS emulator to accelerate the development of ADAS, avoiding numerous real-track tests. Can you take into account the effects of bad weather and the different types of lidar waveforms?
ML: First, we are working on solutions that match our own core technology and system development, which is predominantly 905 nm laser-based direct time-of-flight (dTOF). In this approach, it does not matter whether you use scanning or flash methods, as the emitted spatio-temporal structure of the lidar laser will be recognized by the system. For dTOF methods operating at other wavelengths, the concept can easily be adapted by using different laser and optical trigger technologies. The same concept also holds for indirect TOF methods, which, however, are less frequently used in the context of ADAS.
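For readers unfamiliar with dTOF: the method measures the round-trip time of a laser pulse and converts it to distance via d = c * Δt / 2, as this small snippet illustrates.

```python
# The basic dTOF relation: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def dtof_distance_m(round_trip_ns):
    """Target distance in metres for a measured round-trip time in nanoseconds."""
    return C * (round_trip_ns * 1e-9) / 2.0

# A return arriving ~66.7 ns after the laser pulse puts the target at ~10 m.
print(f"{dtof_distance_m(66.7):.2f} m")
```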
The system is capable of simulating sunlight radiation as a ‘bad weather’ situation, and we are also working on detailed concepts for simulating rain and fog. Effectively, these perturbations act the same way as sunlight: they generate temporally uncorrelated background photons that do not contribute to the (partially suppressed) real TOF signal. For AMCW or FMCW lidar methods, the situation immediately becomes more complex, as the phase, chirp, and/or detailed temporal amplitude modulation need to be tracked and remodelled by the system. While we have general concepts for such solutions on the table, we are afraid they would need to be far more system-specific and are therefore currently of less interest as a Fraunhofer core technology development.
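The effect of uncorrelated background photons can be shown with a toy single-photon timing histogram: sunlight-like noise spreads uniformly over the time bins, while the true return piles up in one bin, so the histogram peak still recovers the target's TOF. The bin counts and rates below are arbitrary illustration values.

```python
import random

BINS, SHOTS = 200, 2000
TRUE_BIN = 133                      # histogram bin of the real target return
hist = [0] * BINS
random.seed(1)

for _ in range(SHOTS):
    for _ in range(3):              # background photons: uniform over all bins
        hist[random.randrange(BINS)] += 1
    if random.random() < 0.3:       # a signal photon is detected in 30% of shots
        hist[TRUE_BIN] += 1

peak = max(range(BINS), key=hist.__getitem__)
print(peak == TRUE_BIN, hist[TRUE_BIN])  # the peak still marks the true TOF bin
```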
DVN: Can the measurement screen emulate a point cloud?
ML: Yes. The concept starts with a single-pixel screen capable of simulating almost arbitrary distances at one point in space. The main idea is to subsequently reduce the complexity of this single-pixel approach so that the concept becomes scalable at reasonable cost and effort. The total size and number of pixels of such a screen will be determined by the needs of potential partners, customers, and the industry.
DVN: Can your technology help reduce lidar power draw?
ML: Our total system aims at generating the point cloud as well as an object and human identification scheme, without the need for further data processing. These are tasks that are often handled separately. The main driver of power consumption is the latter part, which requires a lot of computational power but is absolutely necessary for the evaluation of the point cloud in ADAS. For the overall system, we estimate a reduction in power consumption of roughly 50 per cent.
DVN: Will the advanced technology you are developing be equally applicable and needed for short-, mid- and long-range lidar sensors?
ML: SPAD-based sensors are outstanding in particular in terms of their scalability, which makes them ideal candidates for sensor arrays. In this sense, our main research focus is on flash systems, which are nowadays most suitable for short- to mid-range applications, as the total power of currently available low-cost laser sources is limited and not yet constrained by eye-safety regulations. As mentioned above, we are also currently exploring hybrid methods, where we try to combine the best of both worlds and aim at longer ranges.
DVN: When do you expect first road tests with your technology? What will be the further schedule?
ML: Our main focus is core technology development rather than the design of full, product-like lidar systems. In this regard, we are currently focused mainly on forward-looking topics such as sensor fusion and edge computing, but also on novel concepts for system integration.
One example is the development of a ‘smart headlight’ within a Fraunhofer consortium. Here, we combine lidar, radar, and conventional lighting in a co-axial geometry using special combiner optics, which will allow for easy system integration while maintaining automakers’ form-factor restrictions. Of course, we are also working on further improvements of our SPAD sensor technology in terms of sensitivity, timing behavior, and advanced read-out circuitry.