Perceptions of machine vision being an expensive solution are being challenged by developments in both core technologies and ancillaries. Here, Jason Barnes and David Crawford look at the latest developments in the sector
A notable aspect of machine vision is the flexibility it offers in terms of how and how much data is passed around a network. With smart cameras, processing capabilities at the front end mean that only that which is valid need be communicated back to a central processor of any description. That has direct effects on the amount of bandwidth needed and is just one aspect of the technology which has allowed implementers of machine vision to make purchasing and operational decisions which play off individual cameras’ costs against those of data transfer, and in doing so address assertions that the technology is expensive.
‘Expensive’ is in any case a term wide open to interpretation, says
“Smart camera technology can be used for many different applications, from the simplest such as data collection and travel time information provision up to the more complicated such as enforcement of driver discipline. Especially in cases such as the latter it can offer a very swift return on investment.”
Flexibility and adaptability are big themes, he continues, with the ability to support multiple applications from a single device being readily embraced in developing countries – something which he feels that developed countries should look at and learn from. “The applications set will only grow and smart camera technology will continue to make its presence felt in the market.
Not only does it save money in operation but it reduces cost at the installation stage.”
A brief lesson in history
To gain an impression of just where machine vision has progressed to already, it’s necessary to look back, according to Aldis’ Sowell.

“Now, video detection is an accepted detection standard, and Aldis and its competitors would expect to outfit a four-approach intersection for $15,000 or less. The first-ever market analysis for video detection dates back to 1990 and found that the price point to get to in order to replace loops was around $10,000. We’re there now – or very close.
“Today, like 21 years ago, the installation of a single loop plus amplifier electronics costs about $2,000. But in a location with extremes of climate, longevity is a problem. That loop will last maybe six months in Riyadh. Southern California expects to get more than four years from a preformed loop set in concrete. That’s $2-3,000 on repair or replacement every few years, plus the costs associated with lane closures and police overtime.
“That gives rise to the issue of lifecycle costing. Too often it’s not considered, with capital and operational budgets being administered by different individuals. One guy is going after the lowest possible installed price while the other is trying to retain as large a budget as possible.”
That situation is improving; however, Sowell remains unconvinced that the value proposition of machine vision is yet fully understood.
“Traffic engineers have one thing in common: they want to clone themselves and stand on every street corner so that they can verify in real time what’s going on. They may not be concerned with data collection but affiliated groups might be. The police, for instance, want to quantify incidences of violations so that they can target countermeasures with greater effect.
“We still have a lot of organisational stovepipes, a reluctance to share data and a situation where the capabilities of machine vision are not being effectively utilised. For instance, those same police personnel might deploy surveillance cameras which in truth are redundant because a machine vision system can do the same job perfectly well. Education and awareness would result in the sharing of purchasing and benefits.”
Looking back, in light of over two decades of experience, it might seem that something has gone awry but Sowell doesn’t think so.
“Nothing’s gone ‘wrong’, as such. Machine vision is enjoying organic growth of more than 10 per cent per year, either as new or retrofit. More and more people are seeing its benefits for applications such as automated incident detection. We’re talking about a traffic-related market worth $300 million a year.”
Steady progress
In many respects Sowell sees machine vision as a well-defined and established market, even if it has become something of a buzz term. The conservatism of traffic engineers of two decades ago has been replaced by the technology-savvy outlook of computer boomers, with the result that sales cycles have shortened and tenders invite solutions which propose loops and/or type-approved video-based systems.

“There’s no debate over whether machine vision is acceptable in terms of accuracy; it is. But we need greater understanding of the full feature set,” he notes.
Over the coming years, the cost of implementing machine vision will continue to decline, he adds.
“Already, it’s pretty much a trade-out but prices will fall as installations increase. Machine vision actually saves money but I’d like to see better appreciation of lifecycle costing and inter-agency cooperation.”
Supporting such opinions is the fact that machine vision is a very fast-developing sector. What follows is a comprehensive look at factors which are driving the technology’s increasing utility, as well as some comment on where applications are heading.
Standards: an unending evolutionary process
Innovations such as GigE Vision have done much to standardise the high-speed transmission of data over Ethernet networks but there is still much to be excited about where standards are concerned.

USB 3.0, the latest revision of the ubiquitous standard for computer connectivity, increases even further the flexibility which machine vision offers, says Stemmer’s Steve Hearn.
“Machine vision is in many respects all about the camera plus the network, and there’s a lot of use of standard IT and consumer interfaces. GigE Vision, for example, is really only a combination of Gigabit Ethernet with additional machine vision protocols.
“There are already USB 3.0 products on the market but we’re going to see the same happen with USB 3.0 as happened with high-capacity Ethernet: the emerging USB Vision standard will use core USB 3.0 technologies but again will add protocols for machine vision. What you’ll get is guaranteed performance and, because it’s being developed by the same people who developed GigE Vision, the standards will complement and not conflict. We can expect to see something formalised by the end of this year.”
In terms of application, that is really quite exciting. For example, in a situation where more pixels or 360° coverage are needed, 20 to 30 relatively low-resolution cameras could be arranged on a USB 3.0 bus, whereas previously the only way to achieve the same result would be to use, say, an expensive 10MP camera.
“That means you can use low-cost cameras and lenses but still achieve the massive amounts of data you might want or need,” Hearn states. “The massive bandwidths also mean that you can run something like a 4MP camera at frame rates in the hundreds per second range, should you need to.”
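The bandwidth figures Hearn mentions can be sanity-checked with some simple arithmetic. The sketch below is illustrative only: the resolutions, frame rates, 8-bit monochrome pixels and the assumed ~4 Gbit/s of usable USB 3.0 throughput (after encoding overhead on the 5 Gbit/s signalling rate) are assumptions, not vendor specifications.

```python
# Back-of-envelope data rates for the camera set-ups described above.
# All figures are illustrative assumptions, not vendor specifications.

def data_rate_gbps(megapixels: float, fps: float, bits_per_pixel: int = 8) -> float:
    """Raw pixel data rate in Gbit/s for an uncompressed stream."""
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

# A single 4MP monochrome camera at 120 fps:
rate_4mp = data_rate_gbps(4, 120)          # 3.84 Gbit/s - near the USB 3.0 ceiling

# Thirty VGA-class (0.3MP) cameras at 30 fps sharing one bus:
rate_array = 30 * data_rate_gbps(0.3, 30)  # 2.16 Gbit/s in aggregate

USB3_USABLE_GBPS = 4.0  # assumed usable throughput after encoding overhead
print(rate_4mp, rate_array, rate_4mp < USB3_USABLE_GBPS)
```

The point of the exercise: either a single fast high-resolution camera or a couple of dozen cheap low-resolution ones can plausibly fit within one USB 3.0 link, which is exactly the trade-off Hearn describes.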
Dutch company Adimec supplies cameras for applications which bring their own issues. For example, systems have to operate in harsher external environments than many standard machine vision products. And while a data rate of 60fps offers critical advantages for high-speed highway traffic applications, the resulting data stream is greater than either analogue or GigE Vision interfaces can handle.
Adimec has therefore developed and introduced, as a member of the CoaXPress consortium, a new digital interface, CoaXPress. This became a standard in the first quarter of 2011 and is being hosted by the Japan Industrial Imaging Association (JIIA). It can transmit 6.25Gb/s over a single standard coax cable with a cable length of up to 144m. Adimec’s Joost van Kuijk sees the CoaXPress approach bringing overall system cost reductions from the scope for simplified integration and reduced cabling (with power, I/O and commands over a single cable), re-use of existing analogue cabling, and (since one 2MP unit can replace two or more standard-resolution ones) reductions in numbers of cameras.
Consumer versus commercial versus custom…
As in any market, buyers get what they pay for, says Teledyne DALSA’s Manny Romero. Downward price pressure has resulted in standardised, commoditised machine vision solutions which are perhaps 40 per cent cheaper than even three years ago. Nevertheless, the ITS sector’s price reference point for camera systems is still lower and customers – in this case, those who sell on to the end-user – continue to look to drive costs down even further.
There has been a very aggressive push on the individual component side of systems, he notes. The ITS sector in particular, as distinct from the manufacturing processing industry in which machine vision has its roots, has been shopping for lenses at the commodity end of the market.
“The lens is the most costly component of a machine vision system,” he says. “A typical lens for machine vision applications might cost around US$2,000, whereas consumer-grade Canon or Nikon EF or EF-S lenses cost about a fifth of that. We’re seeing ITS users who want to leverage the cost difference which the high-volume SLR camera lens market offers, or else leverage the full SLR camera itself and adapt it for machine vision applications.”
Although that has a positive – for the buyer – effect on up-front purchase costs, there are pitfalls. Chief among them is that such solutions are consumer- and not industrial-grade. They don’t, therefore, have the robustness of operation across such a wide temperature range and the data flow rates are inadequate for very high-end applications. There is also the high turnover of products in the consumer electronics sector to contend with.
Romero sees such solutions as being adequate – but no more – for portable and mobile applications. They are unable to stand up to the rigours of permanent installation at the roadside. He anticipates that any such solution has a lifespan of 18-24 months, whereas robustness and long-term stability of both the product and its outputs remain focuses for many in the machine vision sector.
“Some customers have been very happy with SLR solutions but they tend to be smaller players looking to compete on a price basis with the bigger players. The solutions have turned out to be cheaper in every sense of the word. Some of the bigger players have also gone down the same path but are now looking to engineer their way out of the situations they’ve found themselves in,” he adds. “I see such solutions having a continuing but decreasing market share over the next couple of years.”
Another way to reduce cost is to go for customised or semi-customised solutions. For certain customers, Teledyne DALSA has taken general-purpose machine vision cameras and then, quite literally, shaved off the un-needed functionalities. This reduces cost and results in a very rigid solution but is a way for system suppliers to be able to provide machine vision performance at less than the ‘full’ price.
“This is in many ways the most usable solution, as you’re not having to front up US$0.5 million for design,” Romero explains. “We use a standard product as a reference and then customise it according to need. We’re seeing more and more solutions like that, not least because a standard product allows customers to trial before they buy.”
A semi-customised product still needs a certain volume to be viable. The exact number depends on the camera and component manufacturer but the numbers can be relatively low – around the 100-150 mark, which makes it a viable course for integrators in the ITS field.
“A general mistake is to try and decouple all components and make each reach a target price. You can’t do that and expect success,” Romero adds. “At the end of the day, customers need to decide what they want a camera to do. If they want the front end to achieve a certain price point, then they have to look at how much processing needs to take place there.
“Take image compression: do you do that in the camera or on the PC? A 12MP image in a good dynamic range needs 12Mb per image just for monochrome. If you want colour on a PC, that’s 24Mb per colour plane transferred. If you need to transfer a lot of images within a short space of time, you might want to compress in the camera instead. Whether something needs to be done in real time or not is a big factor but the semi-customised option represents the best cost-conscious choice.”
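Romero’s in-camera versus on-PC compression question comes down to transfer time. The sketch below works through the trade-off using his 12MB monochrome figure; the Gigabit Ethernet link rate and the 10:1 compression ratio are illustrative assumptions.

```python
# Rough transfer-time comparison behind the in-camera vs on-PC
# compression question. Link rate and compression ratio are assumptions.

def transfer_time_s(megabytes: float, link_gbps: float) -> float:
    """Seconds to move an image of the given size over a link."""
    return megabytes * 8e6 / (link_gbps * 1e9)

RAW_12MP_MONO_MB = 12.0  # 12MP at 8 bits/pixel, as in Romero's example
GIGE_GBPS = 1.0          # Gigabit Ethernet

raw_time = transfer_time_s(RAW_12MP_MONO_MB, GIGE_GBPS)              # 0.096 s per image
compressed_time = transfer_time_s(RAW_12MP_MONO_MB / 10, GIGE_GBPS)  # 0.0096 s per image

print(raw_time, compressed_time)
```

At roughly ten raw images per second the GigE link is already saturated, while assumed 10:1 in-camera compression leaves headroom for around a hundred; hence the question of how much processing belongs at the front end.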
Adimec’s Joost van Kuijk also highlights the trend. Like Romero he notes that there is the choice between custom and off-the-shelf solutions. The immediate reaction to the term ‘customisation’ tends to be that it will automatically result in a more expensive product but van Kuijk believes that careful analysis of the total cost of ownership can show the opposite to be true. He suggests that existing system redesigns, workarounds and software adjustments to compensate for inadequate original quality and functionality are often inefficient in practice and costly in the end.
“Build-it-yourself may sound good at first, since it gives total control over the product but if the need is for leading-edge technology then working with a camera specialist can save huge amounts of time by eliminating the learning curve.
“Using an optimised camera from the start allows more focus on core technologies and eliminates the need for additional design work. Given lengthy product lifecycles in users’ industries, custom cameras also provide better continuity and obsolescence management.”
CCD and CMOS: the old and the (increasingly competent) new
Cost is also a contributory factor in the battle for supremacy between long-established Charge-Coupled Device (CCD) and newer Complementary Metal Oxide Semiconductor (CMOS)-based cameras. The latter are already showing cost benefits in consumer applications and the effect is now coming through in the industrial machine vision sector.

In the past, CCDs have been considered superior because of their quality, notably in offering a higher dynamic range and resolution. But CCD frame rates are slower, they dissipate more power, and they cost more to manufacture and integrate.
Recently, CMOS sensors have shown significant improvements, notably higher speed performance, better integration potential, lower power needs, and sensor resolutions and data qualities which are approaching those of CCDs.
CMOS sensors’ development speed is faster, since they are produced in non-dedicated fabrication facilities, allowing prototypes to be progressed and incorporated into cameras more quickly. CMOS solutions are already very viable alternatives and are likely to outperform their CCD rivals in two or three years – initially in high-speed and later in lower-speed applications, according to Adimec’s Joost van Kuijk.

Even in just the last few months we have seen some significant developments in CMOS-based technology, says Stemmer’s Steve Hearn. Like van Kuijk, he notes that high dynamic ranges are combined with greater ease of manufacture: “With CMOS it is, for instance, possible to have all the necessary capabilities including processing on a single chip, whereas with a CCD-based solution a sensor set and then add-on processing is the norm. The effect with CMOS, therefore, is an overall downward pressure on unit price – a trend which I expect to continue.”
Making light of things
Stepping away from the camera slightly, evolutions in lighting are worth noting, says Teledyne DALSA’s Manny Romero: “Developments in LED-based white and infrared light sources are lifting the pressure on the machine vision end of the solution. We’re seeing newer, more powerful light sources becoming available at a decreasing cost. More capable light sources mean that the lighting systems can either be farther away from the camera itself, or else the acquisition process is faster.”

Stemmer’s Steve Hearn notes that with some enforcement applications processing cannot be done on a single image. For example, an image which shows a vehicle will saturate on the license plate. That results in a need for two cameras: a monochrome Automatic Number Plate Recognition (ANPR) camera and then a second (often colour) to provide contextual information. However, multi-pulse lighting systems allow two images of different intensities to be processed of the same vehicle; with machine vision, the ability to decide frame by frame and at electronic speeds which images are required means it is possible to use just one camera.
“You can use IR flash to make an ANPR capture and then just a fraction of a second later trigger another flash at a different intensity to get the overview image,” he says. “The key is having a high enough frame rate but you don’t just remove the need for one of the cameras; you also potentially remove the need for an external trigger as the trigger itself can be vision-based.
“That has associated installation and maintenance savings. There are many other applications where the inherent intelligence of machine vision can have the same effect.”
LED technology manufacturer Gardasoft has concentrated on getting lighting for machine vision applications right. A driver of that strategy has been a realisation that simply picking machine vision out of applications in manufacturing processing and dropping it into ITS will not work, says the company’s Peter Bhagat.
“That we’ve been very successful in machine vision applications elsewhere doesn’t matter. We’ve still had to take our core technologies and adapt them more closely to the applications and requirements specific to ITS,” he states.
There are a number of areas where LED lighting for vision applications has and will continue to influence innovation in ITS. Multi-pulse LED lighting systems which enable cameras to take several images with different pulse lengths and software which automatically chooses the best one(s) for the application are a departure from the convention, where only two images are taken and then compared. Generally, processing requirements are reduced if the lighting conditions are better and Gardasoft provides solutions which can provide feedback from the camera to the lighting system in real time via Ethernet or RS-232 and take account of time of day and prevailing conditions.
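The multi-pulse selection idea described above can be sketched in a few lines. This is a hypothetical scoring rule for illustration only – keep the exposure with the fewest clipped pixels – not Gardasoft’s actual selection software, and real systems would use application-specific criteria.

```python
import numpy as np

# Given several exposures of the same scene taken with different LED pulse
# lengths, keep the frame with the smallest fraction of clipped pixels.
# The scoring rule is an illustrative assumption.

def best_exposure(frames: list) -> int:
    """Index of the frame with the smallest fraction of clipped pixels."""
    def clipped_fraction(img):
        return np.mean((img <= 5) | (img >= 250))
    return int(np.argmin([clipped_fraction(f) for f in frames]))

# Synthetic 8-bit frames: under-, well- and over-exposed.
rng = np.random.default_rng(0)
under = rng.integers(0, 30, (64, 64), dtype=np.uint8)
good = rng.integers(60, 200, (64, 64), dtype=np.uint8)
over = rng.integers(230, 256, (64, 64), dtype=np.uint8)

print(best_exposure([under, good, over]))  # picks the well-exposed frame (index 1)
```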
“Intelligent, dynamically controlled lighting reduces complexity and cost by, for example, reducing the need for auto-iris functions,” Bhagat explains.
Echoing Hearn’s sentiments, he adds that LEDs score over xenon-based technology because they allow faster pulsing, typically 30Hz versus 2Hz. That allows detection and triggering to be accomplished via the camera, removing the need for an external sensor, such as a loop or laser. Many systems now trigger the camera continuously and use the images to detect whether a vehicle or other object of interest is present.
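Vision-based triggering of the kind just described can be reduced to a simple presence test: run the camera continuously and flag a frame when enough pixels differ from a background reference. The sketch below is a minimal illustration; the thresholds are assumptions, and production systems use far more robust detection.

```python
import numpy as np

# Minimal vision-based trigger: flag a frame as "vehicle present" when the
# changed area against a background reference exceeds a set fraction.
# Threshold values are illustrative assumptions.

def vehicle_present(frame, background, diff_thresh=25, area_frac=0.05):
    """True when more than area_frac of the pixels changed noticeably."""
    changed = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > diff_thresh
    return bool(changed.mean() > area_frac)

background = np.full((48, 64), 100, dtype=np.uint8)  # empty road reference
empty = background.copy()
with_car = background.copy()
with_car[10:40, 20:50] = 200                         # bright block as a stand-in vehicle

print(vehicle_present(empty, background))      # False - no trigger
print(vehicle_present(with_car, background))   # True - fire the flash/capture
```

This is why no loop or laser is needed: the decision to trigger the flash and capture the evidential frame comes from the image stream itself.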
“That’s a further cost reducer because there are no or fewer civils involved in installation and the on-going maintenance burden is much less. LEDs themselves have no consumable parts, which further reduces costs.”
Applications: giving enforcement agencies a sporting chance
Sony’s Stéphane Clauss foresees growing markets within transport security. Major users to date include three French national security organisations – the gendarmerie, the police, and the customs authority – which have equipped 1,000 vehicles with an integrated Automatic Number Plate Recognition (ANPR) system designed by systems integrator Survision to fit inside the limited space of their vehicles’ rooftop light bars. The system incorporates seven Sony FCB-EX20DP cameras together with illumination and processing equipment.
Major sporting events are proving to be a rich market. For the London Olympics, ANPR system specialist NDI-RS is working with London boroughs on the deployment of camera networks using its NDI-RS-C320 system to monitor critical routes in and out of the Games’ home zone in Stratford.
At the heart of NDI-RS systems are two high-resolution Sony FCB block cameras. These output images in both infrared (for ANPR capture) and colour (for evidential overview imaging) to the London Metropolitan Police data centre, which runs NDI-RS’s TALON technology.
Brazil is getting ready to stage the Football World Cup in 2014, followed by the next Summer Olympics in 2016. A third of the vehicles in the country have been brought in illegally, causing a major security problem, and the authorities are using machine vision technology from Curitiba-based FiscalTech to monitor roads. Its handheld RVG-Speed Control device embeds a Sony high-resolution camera with laser speed detection, wireless connectivity and an Optical Character Recognition (OCR) system.
Claimed to be the smallest of its kind, the scanner can monitor up to four traffic lanes simultaneously for traffic volumes and automatically identify vehicle type and colour.
But in many respects, we’re only just beginning to see what this technology can do for the ITS sector, says Gardasoft Vision’s Peter Bhagat: “Specialist applications are now becoming a reality. For example, by having a series of high-quality images taken from different angles, it becomes possible to profile a vehicle, not just identify it by its number plates. That has big implications for security applications. Such things are already being worked on, and by the types of major players who have the ability to get them to market within a relatively short time from now.”
Already, some ANPR cameras can provide more information than previously thought possible. Identifying a vehicle involves more than just discerning its license plate. Other identifying features such as car make and model provide much-needed information to law enforcement and traffic management personnel. Today ANPR cameras can be self-contained edge-of-network computing units with the ability to provide much sharper images at higher resolution, which provides much more visual information, and the algorithmic ability to analyse that information automatically, providing data that was once the sole purview of video analytics.
To address customers’ demands for more information from a single system at lower cost and high efficiencies, Israeli company
The VRS’s recognition capabilities enhance vehicle verification and classification and help check correlation between vehicle type, license plate number, and data stored on police and homeland security databases. The ability to generate immediate alerts improves reaction times to crimes committed, including Amber Alerts in cases of kidnapping.
Less dramatic applications are also benefiting from machine vision technology.
The population of Dubai, in the United Arab Emirates, has more than doubled over 10 years to nearly two million in 2011. The number of vehicles on the road now totals just over one million. That has created a huge demand for parking capacity.
To cope with current levels and expected growth, Dubai’s Roads and Transport Authority (RTA) has deployed an ANPR-based parking management system.
The system is designed to perform accurately at distances of up to 15m in adverse conditions. A central requirement from the RTA was for simplified and speedy management of vehicle entry and exit transactions to optimise gate management and limit waiting time, so reducing on-street traffic congestion. Storage of captured licence plate image data, with each plate character defined in the software and identified using OCR, together with referencing time and date information, provides statistics for traffic level analysis and can restrict parking usage to certain vehicles based on date or time of day.
The camera is connected to an industrial PC running Windows across an Ethernet network via Cat6 Ethernet cable or fibre optics, with a server located at each gate entrance. The company claims that the system can be installed in less than two hours and is low maintenance, needing six-monthly re-calibration to maintain an accuracy exceeding 95 per cent.
The ‘region of interest’ capability of machine vision emerges in a system which has been developed by US company Transport Data Systems (TDS) to capture and decode the official identification numbers that the US Department of Transportation (USDOT) requires commercial vehicles to display.
State DOT workers rely on computer terminals to check carriers’ safety records using the USDOT number but this typically occurs only during random checks or when a truck has violated a weight restriction, for instance. The new system is designed to work 24 hours a day, in variable outdoor lighting conditions, capturing vehicles moving at very high speeds.
Once the image is captured, the reader extracts the USDOT number and cross-references this against federal agency databases, with the results accessible via a browser and stored for future tracking. To achieve the necessary quality of image capture, the camera had to offer: high sensitivity, to ensure the short exposure time necessary to eliminate motion blur caused by vehicle speeds of over 96km/h; a large field of view, in order to contain the whole USDOT number in a single frame; high resolution, to ensure reliable OCR readings; and a high dynamic range, to distinguish key image features in all weather conditions.
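The motion-blur constraint above can be made concrete with a short calculation: how long can the exposure be before a vehicle at 96km/h smears more than one pixel? The field-of-view width and pixel count below are illustrative assumptions, not TDS figures.

```python
# Worked version of the motion-blur constraint: longest exposure keeping
# smear under a given number of pixels. FOV and resolution are assumptions.

def max_exposure_us(speed_kmh: float, fov_width_m: float, pixels_across: int,
                    max_blur_px: float = 1.0) -> float:
    """Longest exposure (microseconds) keeping blur under max_blur_px."""
    speed_m_s = speed_kmh / 3.6
    metres_per_pixel = fov_width_m / pixels_across
    return max_blur_px * metres_per_pixel / speed_m_s * 1e6

# Assume a 3m-wide field of view imaged across 1600 pixels:
print(round(max_exposure_us(96, 3.0, 1600), 1))  # ~70.3 microseconds
```

Exposures of well under a tenth of a millisecond are needed, which is why high sensor sensitivity (and the custom infrared illumination mentioned below) matters so much for this application.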
TDS chose the Grasshopper camera for the task.
Each installation varies, depending on the lane geometry of the inspection station, but in a recent DOT installation in Montana trucks pass through a series of sensors along the access road to an inspection station. Signals from these are relayed to a USDOT reader, which in turn triggers the Grasshopper camera; this uses optically isolated hardware trigger output to strobe TDS’s custom-made infrared LED illuminators to provide optimal illumination for accurate OCR reading.