The Artificial Vision and Intelligent Systems Laboratory (VisLab) of Italy’s Parma University has built itself a fine pedigree in basic and applied research, developing machine vision algorithms and intelligent systems for the automotive field.
In 1998, a VisLab-equipped Lancia Thema named ‘Argo’ travelled along the famous Mille Miglia race route and completed 98 per cent of it autonomously using then-current technology. In 2005, VisLab provided the vision element of the Terramax, a collaborative undertaking with Oshkosh Truck Corp. and Rockwell Collins, which was one of five vehicles to complete the DARPA Grand Challenge course for autonomous vehicles. Subsequently, it has also taken part in the DARPA Urban Challenge, which was intended to prove the ability of self-piloting vehicles to operate safely in built-up areas, and in 2010 it set out on its own 100-day challenge to drive four electric Piaggio mini-vans from Parma to the European Pavilion at the Shanghai International Expo.
Other previous work has included the development of a vision-based pedestrian detection system for VW Group; a project for the US Army using a quad (dual-stereo) camera solution to identify humans in off-road situations; and development to the pre-production stage of road sign recognition technology on behalf of Magneti Marelli.
VisLab is currently transitioning from a research organisation into a commercial one developing and marketing its own products, according to Project Manager Paolo Grisleri. The principal applications of machine vision in the transport sector depend on the timespan under consideration, he says.
“In the next few years, we’ll see some very powerful and direct applications in traffic monitoring and control. We already see lots of deployments of ANPR and traffic monitoring and control solutions. We’re already seeing lane-detection and -keeping assistants and in the longer term, we’ll see even more Advanced Driver Assistance System [ADAS] solutions and moves to replace the driver.
“I don’t see any obstacle to ADAS adoption beyond price. Lane and pedestrian detection as well as road sign recognition using forward-looking cameras will become common even on inexpensive vehicles. Autonomous vehicle solutions still face some legal hurdles. They also need to become more robust, not least because they will have to operate in the same environments as conventional vehicles. On our trip to Shanghai, our autonomous systems worked perfectly well on uncongested inter-urban roads. When we reached Moscow, however, there were carriageways marked with four lanes on which local drivers were acting illegally and effectively forming five. That causes problems for autonomous systems, which still have to grow to account for humans’ unexpected decisions and actions.”
Dedicated autonomous vehicle lanes might be a solution, he notes, which suggests the road freight sector as an early technology adopter. Although the DARPA events showed that the winning technology was laser-based, Grisleri sees major challenges to its more widespread adoption. Some of these are rather less concerned with the technology.
Grisleri: “Vision is much easier to integrate into the vehicle relatively unobtrusively whereas laser needs a large, roof-mounted housing, something which both vehicle manufacturers and buyers will object to aesthetically. Laser also has problems with precipitation – in dense rain or fog it will ‘see’ things which aren’t there. By comparison, vision works in all but the severest of conditions. Vision also has direct classification capabilities that can be exploited for extracting more accurate information on the vehicle surroundings.
“There are still issues with vision-based systems, such as how to reconcile wide-area optics with the need to look at things closer in more detail. But as vision is relatively inexpensive it’s possible to use two systems, one for the wide-area view and another high-precision system for ranges out to 50-80m.
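The two-system arrangement Grisleri describes could be sketched as a simple range-based hand-over between cameras. The 20m cross-over point and the labels below are illustrative assumptions, not VisLab figures:

```python
# Hypothetical sketch of a two-camera arrangement: a wide-area unit
# watches the close surroundings, while a narrow, high-precision unit
# covers ranges out to roughly 50-80m. The 20m hand-over point is an
# assumption chosen purely for illustration.

def pick_camera(target_range_m):
    """Return which camera's estimate to trust for a target at a given range."""
    if target_range_m <= 20.0:
        return "wide"   # wide-area optics: close, broad coverage
    if target_range_m <= 80.0:
        return "tele"   # high-precision system for the 50-80m band
    return None         # beyond the reliable range of either system
```

In a real system the hand-over would be a fusion of both estimates rather than a hard switch, but the dispatch above captures the division of labour being described.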
“The on-vehicle environment is critical in some situations. Going from a situation where the sun is shining directly into a camera’s lens to suddenly being in a tunnel requires a very high dynamic range in order to cope, for instance. There are emerging CMOS-based sensors, such as that from NIT which won the VISION Award this year, which can cope with that and have already been demonstrated in stereo camera systems. Production examples are probably no more than a couple of years away.
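The dynamic-range problem Grisleri describes, sun glare giving way abruptly to a dark tunnel, is commonly tackled by fusing frames taken at different exposures. A minimal exposure-fusion sketch, with illustrative values not tied to any particular sensor:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Fuse multiple exposures of the same scene into one image.

    Each pixel is weighted by 'well-exposedness': values near mid-grey
    (0.5) get high weight; clipped shadows and highlights get low weight.
    frames: list of float arrays in [0, 1], all the same shape.
    """
    frames = np.stack(frames)                      # (n, H, W)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * frames).sum(axis=0)

# Toy example: a dark frame (tunnel interior) and a bright frame
# (sunlit exit); the fused result keeps detail from both.
dark = np.array([[0.05, 0.10], [0.40, 0.50]])
bright = np.array([[0.45, 0.60], [0.95, 0.99]])
fused = fuse_exposures([dark, bright])
```

High-dynamic-range sensors of the kind mentioned above avoid the multi-frame capture entirely, but the weighting idea is the same: favour whichever measurement of each pixel is best exposed.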
“The key to user acceptance is going to be true integration of the optical and processing capabilities – the optical part’s already there but we need solutions which can process at adequate frame rates. Again, though, I’d say they’re no more than a couple of years away.”