Vehicle lighting has come a long way since the days of “electrics only”, when filament headlights offered just a choice between parking light, low beam and high beam. In retrospect, the biggest single change was the introduction of LED technology to the front lights. By using and controlling an LED matrix, the light distribution can be given multiple shapes and ranges, which improves lighting under varying traffic or weather conditions. Initially, LED lighting was understandably employed mostly to facilitate new visual light designs (“light signatures”), which help differentiate between brands and communicate emotional qualities. However, as LEDs always require electronic control, the shift to LED technology has paved the way for a much more profound change that has already begun: Once there is a light control unit (LCU), its electronics offer completely new functional possibilities. The key to exploiting these possibilities lies in networking the LCU with other vehicle systems and with the data they process. Considering that advanced driver assistance systems (ADAS) and highly automated driving (HAD) are based on an increasingly large (sensor) data pool in the vehicle and in the backend, there is great potential in connecting lighting to this growing data pool. New and more precise information about the vehicle environment can thus be used to improve the level of support that lighting offers to the driver – or to the automation. Many years of expertise in developing and manufacturing LCUs are a main contributor to advancing lighting toward intelligent lighting. By expanding the list of potential inputs for lighting control, new use cases become feasible and the driver receives support that is better tailored to the situation. The ultimate goal is to increase the performance of vehicle lighting so that the driver no longer needs to make manual adjustments.
Instead of using hard thresholds to activate limited fixed options, intelligent lighting facilitates a new level of performance, which can include predictive lighting. Among the new intelligent lighting functions portrayed below are glare-free lighting, homogeneous lighting, high-definition lighting, and lighting tuned to the needs of HAD.
Historically, lighting was one of the first vehicle systems to “turn electric” around 1910. However, there it remained for almost a century. Admittedly, modern halogen (1962 onwards) or xenon headlights (1991 onwards) compare to the old 6 volt lighting like a jet airplane compares to a double-decker bus. Light intensity, light color and range have been massively improved and bending or cornering light has improved illumination in demanding driving situations. But still, at the end of the day, the driver’s choice – until quite recently – was either ON/OFF or HI/LO. Even fog lights are an optional feature.
Roughly ten years ago this began to change with the introduction of the first LED low beam in 2007 and the first full LED headlight in 2008. Soon after, the need for LED control units provided the starting point for the first steps up the control quality ladder: In 2010 the first LED headlight with Advanced Front Lighting System (AFS) was introduced, followed by an LED headlight with glare-free high beam (GFHB) in 2013. The same year also saw the first glare-free pixel light high beam system. A year after that the first laser high beam application was introduced to the market. In 2015 the first C segment car was offered with LED matrix lighting (Opel Astra), which marked a breakthrough. By 2016 more than 250 vehicle models across all segments had some kind of LED lighting. The short intervals between recent lighting innovations point the way: Once LED technology and electronics are in place, there is ample room for innovation.
These innovation activities have been significantly supported by Continental over the last decade, and many OEM LED (and laser) headlight applications are already equipped with our LCUs (currently in the 3rd generation). This focus on LED (and laser) lighting technology from day one is owed to the long and deep corporate background of Continental in chassis and body control units, activities that are now pursued by us, OSRAM Continental. The necessary background in electronics, miniaturization, higher integration levels, thermal management, sensor fusion (ADAS, HAD), electric/electronic architecture (EEA), software development, post-manufacturing function download via the Internet of Things (software-over-the-air, SOTA), connectivity (IoT, V2V) and networking was pivotal in developing LCUs and intelligent lighting technology. It is to be expected that LED technology will continue to gain market share and become more widespread across vehicle models and brands, as illustrated in Fig. 1. What is more, over the next four years LED fitment rates are expected to rise in all vehicle segments.
Fig. 1: Expected lighting technology trends (based upon an IHS and SA study from 2017, covering Europe, NAFTA, China, India, Japan, and South America)
Without electronics the choice of lighting control exerted by the driver or through automated low/high beam control is restricted to ON/OFF or HI/LO. There is very little chance to add more options that would provide better lighting in the many potential driving situations, which can at times make controlling a car a demanding task because either visibility is poor or the driver is confronted with an overflow of visual stimuli: Poor weather, fog, narrow bending roads, tunnels, oncoming traffic, wet asphalt, hilltops, dips, or the confusing multitude of many-colored inner city lights.
As LED headlights consist of a matrix of individual LEDs or LED arrays, they bring for the first time the possibility either to switch individual LEDs/arrays on or off in order to produce a very specific light distribution, or to change their position via electronically controlled actuators such as stepper motors to achieve the same effect. Both control actions influence the light distribution and potentially the light range, depending on the instantaneous requirements of the driving situation. Fig. 2 provides examples of the multitude of helpful lighting pattern variations that are already part of an AFS solution according to ECE-R 123 (“Adaptive Front-Lighting System”). A standard AFS functional choice includes: Bending Light (BL), Adverse Weather Light (AWL), High Beam (HB), Motorway Light (ML), Country Light (CL), Town Light (TL), and Front Fog-Light (FFL).
Fig. 2: Functional lighting scope of a basic AFS package
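The basic control action described above – switching individual matrix elements off to shape the light distribution – can be sketched in a few lines. The following is a minimal illustration only: a single-row matrix, a uniform field of view per headlight, and all function and parameter names are assumptions, not the actual LCU interface.

```python
# Sketch: mapping an angular interval to be darkened onto a single-row
# LED matrix. All names and parameters are illustrative assumptions.

def led_mask(num_leds, fov_deg, dark_from_deg, dark_to_deg):
    """Return a per-LED on/off list; each LED covers fov/num_leds degrees,
    with 0 deg meaning straight ahead."""
    sector = fov_deg / num_leds
    mask = []
    for i in range(num_leds):
        # Angular span covered by LED i, measured from the left edge.
        left = -fov_deg / 2 + i * sector
        right = left + sector
        # Switch the LED off if its sector overlaps the dark interval.
        overlaps = left < dark_to_deg and right > dark_from_deg
        mask.append(not overlaps)
    return mask

# Example: 8 LEDs over a 24 deg field; darken -3..+3 deg around the center.
print(led_mask(8, 24.0, -3.0, 3.0))
# → [True, True, True, False, False, True, True, True]
```

With only eight elements, the darkened zone is necessarily coarser than the requested interval – the motivation for the high-definition matrices discussed later in the text.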
From the point of view of ergonomics, good lighting is an absolute precondition for any complex task such as orientation, trajectory planning, micro navigation, rule/limit/sign recognition, lane marking recognition, or the spotting of poorly lit vulnerable road users (VRUs) such as pedestrians and cyclists, see Fig. 3. Considering the responsibility placed on a human driver, good lighting is not a luxury but a mandatory vehicle requirement, which can be met best through intelligent lighting.
Fig. 3: Vulnerable road users will also benefit from intelligent lighting
LED headlights require at least a minimum amount of electronics to control the LED matrix. This functional content is contained in the headlight interface, located directly near the LEDs. Owing to the place of installation and the limited space (between the engine compartment infrastructure and the light), LCUs need to be miniaturized in spite of growing functional content. Also, thermal management is a challenge intensified by miniaturization. Ultimately it is a matter of the vehicle’s electric/electronic architecture (EEA) whether the algorithms for lighting control are integrated in the LCU or whether the headlight interface provides only a minimum level of control. Both strategies – more local intelligence vs. concentration of intelligence in domain controllers – can be seen in the market. It is therefore a practical approach to use a modular software architecture. The intelligent lighting algorithms are independent from the headlight interface and can be allocated in any suitable ECU such as a Body Domain Controller (BDC) or a Chassis Domain Controller (CDC), Fig. 4.
Fig. 4: Intelligent lighting is no stand-alone solution but the result of networking and system integration
It is mostly a strategic decision of the OEM as to what the preferred architecture and partitioning of lighting algorithms is. However, depending on the strategy – distributed intelligence versus pooled intelligence – the networking for data transport in the vehicle needs to be defined accordingly. While a vector-based interface, for instance, will require less bandwidth, the transmission of raw camera data to a larger detached ECU requires a much higher bandwidth. Intelligent lighting reflects these potential variations with a modular software architecture that provides interfaces for both options.
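The modular idea above – the same lighting algorithm behind either a low-bandwidth vector interface or a high-bandwidth raw-data interface – can be sketched as a small abstraction. All class, method, and parameter names here are assumptions for illustration, not the product software.

```python
# Sketch: one lighting algorithm, two interchangeable data interfaces.
# Names and the data model are illustrative assumptions.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class DetectedObject:
    azimuth_deg: float   # angular position relative to the vehicle axis
    distance_m: float

class LightingInput(ABC):
    """Abstracts where detection happens, so the lighting algorithm
    stays host-independent (LCU, BDC, or CDC)."""
    @abstractmethod
    def objects(self) -> list:
        ...

class VectorInterface(LightingInput):
    """Low-bandwidth option: object lists arrive already fused (e.g. from ADAS)."""
    def __init__(self, object_list):
        self._objects = object_list
    def objects(self):
        return self._objects

class RawCameraInterface(LightingInput):
    """High-bandwidth option: a detached ECU runs detection on raw frames.
    The detector itself is a hypothetical placeholder here."""
    def __init__(self, frames):
        self._frames = frames
    def objects(self):
        return detect_objects(self._frames)  # hypothetical, not implemented

def glare_free_targets(source: LightingInput, max_range_m=150.0):
    """Host-independent algorithm step: which objects must be faded out."""
    return [o for o in source.objects() if o.distance_m <= max_range_m]
```

The design point is that `glare_free_targets` never changes, regardless of whether the OEM chooses distributed or pooled intelligence; only the concrete `LightingInput` implementation does.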
Intelligent lighting functions are facilitated by expanding the data basis used to activate them. Functions such as cornering light are already based today upon sensor signals for acceleration/speed, steering angle, and wiper activity (Adverse Weather Light). The next step on the innovation path is to include data from other sources in the vehicle and from outside the vehicle.
- The in-vehicle data basis can be expanded by using object lists (instead of just “dark areas” ahead) provided by ADAS functions and automated driving solutions through analyzing camera, radar, and LiDAR data (Light Detection And Ranging based on laser technology). This information on objects and their position – which is passed on to algorithms building a comprehensive model of the vehicle environment, complete with available trajectories within applicable traffic rules – can be used to refine intelligent lighting functions such as Glare-Free High Beam (GFHB). Based on information provided by ADAS “intelligence” the LCU can control a pixel headlight accordingly to fade out oncoming vehicles from the light distribution with greater precision and with higher system availability, as illustrated in Fig. 5.
Fig. 5: Glare-Free High Beam (GFHB) is one of the intelligent lighting functions demonstrated in the OSRAM Continental intelligent headlight test car
A static eHorizon can support functions such as bending light with map data on curve radii, activating the function without any reaction time. This type of advanced control, based on static map data, is already available in the market. Advanced/intelligent lighting functions such as GFHB can be facilitated either by stepper motor control or by pixel light control.
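The object-based fade-out that GFHB performs is, at its core, a small geometric computation: an object position and width from the ADAS object list are converted into an angular interval to be darkened. The following sketch illustrates this under simplifying assumptions (flat road, single object, illustrative safety margin); it is not the production GFHB algorithm.

```python
# Sketch: deriving the fade-out interval for a glare-free high beam from
# an ADAS object entry. Margins and names are illustrative assumptions.
import math

def fade_out_interval(lateral_m, distance_m, width_m, margin_deg=0.5):
    """Angular interval (degrees, 0 = straight ahead, negative = left)
    that the pixel headlight should darken for a vehicle at the given
    position ahead."""
    half_width = width_m / 2
    left = math.degrees(math.atan2(lateral_m - half_width, distance_m))
    right = math.degrees(math.atan2(lateral_m + half_width, distance_m))
    # A safety margin keeps the gap slightly wider than the object itself.
    return (left - margin_deg, right + margin_deg)

# Oncoming car 2 m left of our axis, 50 m ahead, 1.8 m wide:
lo, hi = fade_out_interval(-2.0, 50.0, 1.8)
```

Because the interval is recomputed from each new object-list cycle, a more precise and more frequently updated object list (sensor fusion instead of “dark areas ahead”) directly translates into a more precise gap and higher availability of the high beam.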
Another existing advanced lighting function based on in-vehicle data is Marking Light: An object detected and classified by ADAS as, for instance, a pedestrian walking on the shoulder of a road can be warned by headlight flashing to ensure that there is communication between the vehicle and the person.
- Additional benefits for the driver can be reaped by adding information from beyond the vehicle to the lighting control, with the aim of improving its predictive qualities. The add-on solution eHorizon.Weather provides an example: Based on a dongle, vehicle status information such as rear fog-light On/Off, wipers On/Off, ABS/ESC activity, and temperature is collected, processed in a backend in real time, and the refined results are communicated as weather alerts via a smartphone app, as illustrated in Fig. 6. While the lighting control is still manual in this solution, it points the way. The existing eHorizon.Weather service is the first step on a path that will lead to a vehicle-integrated solution, based on the telematics unit in the car acting as the gateway. As Fig. 7 illustrates, in the future this kind of fog information can be used within a dynamic eHorizon that handles information from the cloud (coming from a service-providing backend) and also from local V2V communication.
Fig. 6: In the future weather alerts can also be used to facilitate predictive lighting functions
As both technologies (cloud/backend and direct V2V) are very likely to exist in parallel within a heterogeneous landscape, probably for decades (with some vehicles still not connected at all), both data links and the smartphone should be seen as complementary to achieve the widest possible coverage and reach. When connected with the LCU, fog information can be used to automatically activate the rear fog-lights and front fog-lights as soon as the car approaches the relevant area. Clearly, this form of intelligent lighting control is also a step towards the servitization of lighting by adding additional benefits.
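The automatic fog-light activation described above amounts to a geofence check against backend weather alerts. A minimal sketch follows; the alert format, radius, and approach margin are assumptions for illustration, while the haversine distance itself is standard.

```python
# Sketch: switch fog lights on when the vehicle approaches an area covered
# by a backend fog alert. Alert structure and margin are assumptions.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def fog_lights_requested(vehicle_pos, alerts, approach_margin_m=500):
    """True if the vehicle is inside any alert zone plus a margin, so the
    lights come on shortly before the vehicle actually enters the fog."""
    lat, lon = vehicle_pos
    for alert in alerts:
        d = haversine_m(lat, lon, alert["lat"], alert["lon"])
        if d <= alert["radius_m"] + approach_margin_m:
            return True
    return False

# Illustrative alert: a 2 km fog zone at an arbitrary position.
alerts = [{"lat": 48.137, "lon": 11.575, "radius_m": 2000}]
```

The margin ahead of the zone edge is what makes the function predictive rather than reactive: the lights are already on when visibility drops.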
Another example would be a Connected High Beam system: In this case the information about high beam status coming from other vehicles (through the direct V2V channel and/or through the backend/cloud channel) can serve to control the high beam without driver activity in situations where other cars ahead have dipped their headlights, for instance, because there is a solitary building next to a country road. This lighting information comes from vehicles, which communicate it to one another, and is transmitted to a backend where the data is analyzed, refined and subsequently redistributed to other cars. Again, V2V and backend should not be seen as an “either/or” decision but as a complementary approach that helps to reach as many cars and drivers as possible to improve the quality of lighting control and driver support.
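On the backend side, the Connected High Beam described above reduces to aggregating dip reports per road segment and redistributing segments where most recent vehicles dipped. The data model and thresholds below are assumptions chosen to make the aggregation concrete.

```python
# Sketch: backend aggregation for a Connected High Beam. Vehicles report
# where they dipped their headlights; segments where enough vehicles
# dipped become "dip here" hints. Thresholds are illustrative assumptions.
from collections import defaultdict

def dip_hints(reports, min_reports=5, dip_share=0.7):
    """reports: iterable of (segment_id, dipped: bool) tuples.
    Returns the set of segments with a sufficient share of dip events."""
    total = defaultdict(int)
    dipped = defaultdict(int)
    for segment, was_dipped in reports:
        total[segment] += 1
        if was_dipped:
            dipped[segment] += 1
    return {
        seg for seg, n in total.items()
        if n >= min_reports and dipped[seg] / n >= dip_share
    }
```

Requiring a minimum number of reports before emitting a hint is one simple way to keep a single spurious dip (e.g. a driver dazzling-testing the stalk) from being redistributed to the whole fleet.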
High-definition lighting systems (HD lighting) are a widely discussed topic and will be rolled out in the coming years. In comparison to existing standard matrix light solutions with maybe eight controllable elements per headlight and up to perhaps 84 LEDs each, HD lighting can be based on many different systems, e.g. comprising two or three LED arrays with 1024 pixels each or even higher resolutions. With HD lighting, GFHB systems can be further improved and new functions implemented. To fully utilize the high definition of such matrices, the matrix control requires connectivity and object data from sensor fusion in particular. This data serves to calculate the position of illumination gaps with the greatest possible precision and the optimum shape. Two principal technical solutions are currently being discussed, both of which have specific strengths and weaknesses:
- Subtractive systems can be based on Digital Micromirror Devices (DMDs) and Digital Light Processing (DLP) with high resolution. Subtractive DLP lighting functions can only be used within a somewhat limited forward opening angle of around 7°, though, because otherwise there will be too little light in certain areas. The latest option for creating subtractive lighting functions is LCD technology.
- Additive solutions are based on laser light (as a white light source) and micro-structured LEDs. These will offer more light.
We are doing research based on the DLP system (in the pixel light test car, mentioned in section 4) in order to deeply understand high-definition lighting systems and to be able to provide ECUs for HD lighting in general.
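The precision gain that HD matrices bring to gap placement can be made tangible by rendering the same fade-out interval on a coarse matrix and on an HD matrix. Resolutions and field of view below are illustrative assumptions only.

```python
# Sketch: one fade-out interval, two matrix resolutions. Shows why a
# 1024-pixel row darkens far less surplus area than an 8-element row.
# Field of view and resolutions are illustrative assumptions.

def dark_columns(num_cols, fov_deg, dark_from_deg, dark_to_deg):
    """Indices of matrix columns whose angular sector overlaps the gap."""
    sector = fov_deg / num_cols
    cols = []
    for i in range(num_cols):
        left = -fov_deg / 2 + i * sector
        if left < dark_to_deg and left + sector > dark_from_deg:
            cols.append(i)
    return cols

# A 1.0 deg wide gap over a 20 deg field of view:
coarse = dark_columns(8, 20.0, -0.5, 0.5)      # 2.5 deg per element
hd = dark_columns(1024, 20.0, -0.5, 0.5)       # ~0.02 deg per pixel
```

With the assumed numbers, the coarse matrix must darken two 2.5° elements (5° in total) to cover a 1° gap, while the HD matrix darkens roughly the 1° actually required – the rest of the scene stays lit.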
The high level of freedom that HD lighting offers to define light distribution and range can be utilized to support the driver, to improve the human-machine interaction between driver and car, and to improve the communication between a vehicle and other types of road users. For instance, if a VRU on a poorly lit shoulder or sidewalk is picked up by an ADAS, intelligent lighting can mark out this VRU by directing more light at him/her (without dazzle of course) to establish dual communication: The driver will be made aware of the person through highlighting him/her (in the true sense of the word), while the VRU will realize that he/she has indeed been noticed by the vehicle/driver. This helps to clarify uncertain and potentially dangerous situations. The key to this lies in expanding the human-machine interface to include lighting.
The Augmented Reality Head-Up Display (AR-HUD) offers an additional level of improving adaptive light distribution within a GFHB solution. In principle, computing the size and positioning of the augmentations (= virtual symbols and highlights added to the driver’s real-world view by temporarily overlaying it) requires a very similar solution as a GFHB system. In the case of the augmentation, this function is carried out by the AR Creator control unit. It contains a powerful set of algorithms that calculate the shape, size and position of each augmentation based upon a comprehensive data fusion of the vehicle environment ahead (as seen by the driver) and the vehicle positioning. By networking the AR Creator with the LCU, other vehicles can be faded out from the high beam with greater precision because the light distribution can be matched against a second control input, representing the driver’s view. Based on this concept, it is possible to develop an integrated human-machine interface solution, which includes the cluster instrument, the Head-Up Display (= windscreen or combiner HUD), the AR-HUD, and the headlights. This networking of intelligent lighting with the AR Creator can be applied to plausible future use cases. It would be a helpful addition to the interaction concept between driver and vehicle if augmentations such as navigation symbols (virtual turn-by-turn indicator arrows) were complemented by pointing light towards the intended direction, for instance, if a driver needs to take a right turn at the next crossing.
Highly automated driving (HAD) will very likely change the requirements towards lighting control. For instance, the driver will require less light as long as he/she pursues other activities. However, there must be enough light for the camera sensors and for the driver to check the situation. Also, intelligent lighting can help during the transition process when the automation seeks to give the driving task back to the driver. This hand-over procedure could be supported through lighting, for instance. Thus, intelligent lighting may be able to contribute to the driver’s so-called mode awareness: Making sure that the driver always intuitively understands his or her instantaneous role and responsibility (i.e. “driver” or “observer”) is one key element of an interaction concept for HAD. New vehicle lighting functions could be used to further strengthen this crucial mode awareness. One example is to increase the amount of forward lighting and possibly to widen the light cone at the beginning of the hand-over procedure to give the driver the best possible understanding of the situation. Another valuable contribution intelligent lighting could make to HAD lies in its trust-building quality. If vehicles ahead, recognized objects or traffic signs were pointed out to the driver through lighting, the driver would intuitively understand that his/her vehicle had detected them, which would help to build trust in the automation. Obviously this possible element of the human-machine interface needs to be part of a holistic approach that takes all pillars of the interaction concept into consideration (i.e. the cluster instrument, display(s), acoustic/haptic communication and the (AR-)HUD as well).
On the other hand, a vehicle that is in an automated driving phase may need lighting as an additional element of communication with either other vehicles or pedestrians. During inner city driving, for instance, a driver reading e-mail will not be able to communicate with a pedestrian to signal “I have seen you and will give you the right of way”. Suitable headlight signals could be part of this new need for a human-machine interaction. One of the benefits of this new type of communication is that it works on a very fast communication channel. Intelligent lighting is not only based on new inputs and better information, it can also improve the data basis for HAD: The process is bi-directional as the automation control could be given the authority to request better lighting (= homogeneous lighting) in a specific forward area in order to improve visual object classification, for instance. Despite the massively advanced dynamics of current and upcoming camera generations, this can help the algorithms that process the camera raw data. Obviously a request of this type can only be met within a glare-free approach. Homogeneous lighting will improve the performance of sensors just as it can serve to improve the performance of a human driver.
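The bi-directional request described above – the automation asking the LCU for homogeneous, boosted light in a forward window, granted only where no glare protection applies – can be sketched as an interval computation. Function names, the 1° step size, and the data model are assumptions for illustration.

```python
# Sketch: the automation requests homogeneous light in a forward angular
# window; the LCU grants only sub-sectors outside glare-protected windows.
# Names, step size, and data model are illustrative assumptions.

def homogeneous_light_plan(request_deg, protected_deg, step_deg=1.0):
    """request_deg: (from, to) window the automation wants brightened.
    protected_deg: list of (from, to) glare-free windows covering other
    road users. Returns the sub-sectors that may actually be boosted."""
    lo, hi = request_deg
    granted = []
    a = lo
    while a < hi:
        b = min(a + step_deg, hi)
        # Boost this sub-sector only if it touches no protected window.
        if all(b <= p_lo or a >= p_hi for p_lo, p_hi in protected_deg):
            granted.append((a, b))
        a = b
    return granted

# Request -5..+5 deg, with an oncoming car protected at -2..0 deg:
plan = homogeneous_light_plan((-5.0, 5.0), [(-2.0, 0.0)])
```

This captures the constraint stated in the text: the automation's request for better machine vision is always subordinate to the glare-free requirement.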
The necessity to align the functional choice of vehicle lighting with the “unique needs” of HAD is also reflected in the SAE Recommended Practice J3134 “Autonomous Vehicle Lighting”, which is put together by a task force as a first guideline to make “recommendations for standardized solutions to meet these needs”.
Intelligent lighting is about to become a new element of driver assistance. Like ADAS, situation-specific lighting profiles will help the driver and/or support the vehicle automation. The prerequisite for exploiting new lighting functions is connectivity. In the future, data for intelligent lighting control will come from additional sources in the car and from other sources (sensors, eHorizon, V2V, ADAS, HAD) to facilitate functions such as predictive lighting. A system like GFHB, for instance, can be improved via V2V communication, with vehicles approaching each other communicating over the air interface to allow a fast reaction time or anticipatory GFHB.
Intelligent lighting control, capable of learning (or rather, pattern recognition), will be able to process the decisions of other vehicles ahead (such as switching to low beam when approaching a house on a country road). An appropriate lighting function can thus be activated without reaction time and without glare.
Overall, intelligent lighting will serve to increase the performance of vehicle lighting with the aim of freeing the driver from – ideally – all light control actions. Instead the driver will always be provided with the best lighting profile for the instantaneous driving and traffic situation. Rather than focusing on the current hard thresholds of legislative requirements, intelligent lighting will be able to offer higher performance levels. During HAD, intelligent lighting control may be able to meet requirements to improve both the human vision and the machine vision. At the same time, intelligent lighting paves the way for the servitization of lighting through new backend-based services that offer additional benefits.
Intelligent lighting control can also make a contribution to the Road Database, which is to be built up by collecting and processing high-definition vehicle camera input on road landmarks to improve the data basis for intelligent traffic solutions and HAD. If the vehicle position is known precisely and the road database contains relevant lighting information gained from processing the input from many vehicles that have already passed through the area, then the lighting of other vehicles will be able to benefit from this information.
ADB = Adaptive Driving Beam (= glare-free high beam)
ADAS = Advanced Driver Assistance System
AFS = Advanced Front-Lighting System / Adaptive Front-Lighting System
AR-HUD = Augmented Reality Head-Up Display
AWL = Adverse Weather Light
BDC = Body Domain Controller
BL = Bending Light
CDC = Chassis Domain Controller
CoBePa = Continental Backend Platform (for SOTA)
CL = Country Light
DLP = Digital Light Processing
DMD = Digital Micro-mirror Device
ECU = Electronic Control Unit
EEA = Electric/Electronic Architecture
FFL = Front Fog-Light
GFHB = Glare-Free High Beam
HAD = Highly Automated Driving
HB = High Beam
HD lighting = High-definition lighting
HUD = Head-Up Display (windscreen or combiner)
IoT = Internet of Things
LCU = (Exterior) Lighting Control Unit
LED = Light-Emitting Diode
LiDAR = Light Detection And Ranging
ML = Motorway Light
OEM = Original Equipment Manufacturer (i.e. vehicle manufacturer)
SOTA = Software-Over-The-Air (download)
TL = Town Light
V2V = Vehicle-to-Vehicle (communication)
VRU = Vulnerable Road User
Mr. Maximilian Austerer