Solution overview
A radar-video fusion camera is a visual camera with a fully integrated radar module. As such it can use radar and video – separately or combined – to detect and classify objects.
The benefits of radar-video fusion are more accurate detections and classifications, and fewer false and missed alarms. The fusion of the two technologies comes together in AXIS Object Analytics, which is the main interface for accessing and configuring radar-video fusion.
AXIS Q1656-DLE detects and classifies objects in wide areas with depth, and you can use it for area monitoring or road monitoring. Additionally, AXIS Q1656-DLE works well in a site design combined with other devices. Since the detection range of the radar is larger than the field of view of the camera in AXIS Q1656-DLE, combine it with PTZ cameras with IR illumination to achieve visual confirmation in the entire detection range of the radar. Or combine it with thermal cameras, which can detect and classify objects in long and narrow areas.
Why fusion?
Used on their own, video and radar both have their own strengths and limitations:
Video typically provides more accurate classifications when there is sufficient contrast and the object is moving close to the camera. It also provides more granular classifications than radar. However, a camera needs good lighting conditions to see.
Radar, on the other hand, can detect objects even in challenging lighting conditions, and its detection and classification range is longer. Regardless of the weather conditions, the radar can measure the speed of a moving object, as well as its direction and the distance to it. However, the lack of visual confirmation can make radar classifications more fragile. Swaying objects and reflective surfaces can trigger false alarms and must be taken into consideration when you design the site and configure the radar.
The two technologies in the radar-video fusion camera can of course be used on their own but are more powerful when the analytics from both technologies interact to provide more reliable detections and classifications.
Radar-video fusion explained
This product fuses the radar data with the video data in two ways:
Visual fusion: The radar detections and classifications are fused into the video image. This is a way to visualize the radar data in the video stream when the video analytics isn’t available.
For example, if an object appears at a distance of 50 m (164 ft), it may be too small for the video analytics to detect, but the radar can identify it. In that case, the radar detection is fused into the image plane and can be used to trigger alarms inside AXIS Object Analytics.
Analytics fusion: The radar detections and classifications are fused with the detections and classifications from the video analytics. This gives the device a combined analytics output where the respective strengths of both technologies are merged. It uses the distance and speed from the radar, and the position and class from the video.
When the object in the example above comes closer, the video analytics also detects it. The radar detection is then fused with the video analytics output to produce an output of higher quality, and with more information, than what the technologies can provide separately.
Installation
Preview mode
Preview mode is ideal for installers when fine-tuning the camera view during installation. No login is required to access the camera view in preview mode. It is available only when the device is in its factory default state, for a limited time after power-up.
Installation guide
The installation guide and other documents for this product can be found at axis.com/products/axis-q1656-dle/support#support-resources.
Considerations
Where to install the product
To get the best video and radar coverage, the product must be appropriately mounted. Consider the following when you mount a radar-video fusion camera:
Area or road monitoring
This product is intended for monitoring open areas and you can use it either for area monitoring or road monitoring. For installation examples and use cases, see Area installation and Road installation.
Avoid solid and reflective objects
Solid and metal objects can affect the performance of the radar in AXIS Q1656-DLE. Most solid objects (such as walls, fences, trees, or large bushes) in the coverage area will create a blind spot (radar shadow) behind them. Metal objects in the field of view cause reflections that affect the ability of the radar to perform classifications. This can lead to ghost tracks and false alarms in the radar stream.
For information about how to handle solid and reflective objects in the coverage area of the radar, see Add exclude zones.
Mounting position
Install the product on a stable pole, or on a spot on a wall free from other objects or installations. Objects that reflect radio waves within 1 m (3 ft) to the left or right of the product affect the performance of the radar in AXIS Q1656-DLE.
If you install the product on a wall, it should point away from the wall by at least 15°.
Additionally, the mounting height affects the detection distance and range of both the video and the radar.
Tilt angle
The product must be sufficiently oriented towards the ground so that the center of the image is below the horizon. The recommended mounting tilt is 15–45°.
You can add an overlay in the radar’s live view that shows the tilt angle of the product. For instructions, see Show a text overlay with the tilt angle of the radar.
Roll angle
The product's roll angle must be nearly equal to zero, which means that the image should be level with the horizon.
Coexistence
If you mount more than eight radars or radar-video fusion cameras operating on the 60 GHz frequency band close together, they may interfere with each other. To avoid interference, see Install multiple Axis radar devices.
Install multiple Axis radar devices
Coexistence
The radio waves of the radar in AXIS Q1656-DLE continue beyond the detection area, and can interfere with other radars up to 350 m (380 yd) away. This is called a coexistence zone.
AXIS Q1656-DLE operates on the 60 GHz frequency band. You can install up to eight Axis radars or radar-video fusion cameras operating on the 60 GHz frequency band close to each other, or facing each other, without causing problems. The built-in coexistence algorithm can find a suitable time slot and frequency channel that will minimize interference.
If an installation contains more than eight radar devices operating on the same frequency band, and many of the devices point away from each other, the risk of interference is lower. In general, radar interference will not cause the radar to stop functioning. A built-in interference mitigation algorithm tries to repair the radar signal even when interference is present. A warning about interference is expected in an environment with many radars operating on the same frequency band in the same coexistence zone. The main impact of interference is deteriorated detection performance and occasional ghost tracks.
You can combine the radar-video fusion camera with Axis radars operating on another frequency band without having to think about coexistence. Axis radar devices that are operating on different frequency bands will not interfere with each other.
Radar coverage
The radar in AXIS Q1656-DLE has a horizontal field of detection of 95°. The detection range of the radar depends on factors like the scene, the mounting height and tilt angle of the product, and the size and speed of the moving objects.
The detection range also depends on the monitoring profile you select. You can use AXIS Q1656-DLE for area or road monitoring and there are two profiles in the radar that are optimized for each one of the scenarios:
Area monitoring profile: the radar tracks and classifies humans, vehicles and unknown objects moving at speeds lower than 55 km/h (34 mph). For information about detection range, see Area detection range.
Road monitoring profile: the radar mainly tracks and classifies vehicles moving at speeds up to 200 km/h (125 mph). For information about detection range, see Road detection range.
- Note
When the radar and video is combined in AXIS Object Analytics, AXIS Q1656-DLE can classify vehicle subclasses (buses, cars, bikes, trucks, and other).
Select the area or road monitoring profile in the product’s web interface. For instructions, see Select a radar profile.
Area of coverage
The radar in this device has a horizontal field of detection of 95°. The area of coverage corresponds to 2700 m² (29000 ft²) for humans and 6100 m² (65600 ft²) for vehicles.
Optimal area coverage applies when the product is mounted at 3.5–7 m (11–23 ft). The mounting height will affect the size of the blind spot below the radar.
Area detection range
The detection range is the distance within which an object can be tracked and can trigger an alarm. It’s measured from a near detection limit (how close to the device a detection can be made) to a far detection limit (how far from the device a detection can be made).
The area monitoring profile is optimized for human detection; however, it also allows you to track vehicles and other objects moving at up to 55 km/h (34 mph), with a speed accuracy of +/- 2 km/h (1.24 mph).
When mounted at the optimal installation height, the detection ranges are:
5–60 m (16–200 ft) when detecting a human
5–90 m (16–300 ft) when detecting a vehicle
- Enter the mounting height in the web interface when you calibrate the radar.
- The detection range is affected by the scene and the product’s tilt angle.
- The detection range is affected by the moving object type and size.
The radar detection range was measured under these conditions:
The range was measured along the ground.
The object was a person with a height of 170 cm (5 ft 7 in).
The person was walking straight in front of the radar.
The values were measured when the person entered the detection zone.
The radar sensitivity was set to Medium.
Mounting height | 15° tilt | 20° tilt | 25° tilt | 30° tilt | 35° tilt | 40° tilt | 45° tilt |
---|---|---|---|---|---|---|---|
3.5 m (11 ft) | 6.0–60+ m (19–196+ ft) | 5.0–60+ m (16–196+ ft) | 4.0–60+ m (13–196+ ft) | 4.0–60 m (13–196 ft) | 4.0–55 m (13–180 ft) | 4.0–40 m (13–131 ft) | 4.0–30 m (13–98 ft) |
4.5 m (14 ft) | 6.0–60+ m (19–196+ ft) | 6.0–60+ m (19–196+ ft) | 5.0–60+ m (16–196+ ft) | 4.0–60+ m (13–196+ ft) | 4.0–60 m (13–196 ft) | 4.0–45 m (13–147 ft) | 4.0–40 m (13–131 ft) |
6 m (19 ft) | 10–60+ m (32–196+ ft) | 9.0–60+ m (29–196+ ft) | 7.0–60+ m (22–196+ ft) | 6.0–60+ m (19–196+ ft) | 6.0–60 m (19–196 ft) | 5.0–55 m (16–180 ft) | 5.0–55 m (16–180 ft) |
8 m (26 ft) | 16–60 m (52–196 ft) | 14–60 m (45–196 ft) | 10–60 m (32–196 ft) | 8.0–60+ m (26–196+ ft) | 8.0–60+ m (26–196+ ft) | 7.0–60 m (22–196 ft) | 7.0–60 m (22–196 ft) |
10 m (32 ft) | 21–60 m (68–196 ft) | 19–60 m (62–196 ft) | 14–60 m (45–196 ft) | 12–60+ m (39–196+ ft) | 10–60+ m (32–196+ ft) | 9.0–60 m (29–196 ft) | 9.0–60 m (29–196 ft) |
12 m (39 ft) | 25–60 m (82–196 ft) | 23–60 m (75–196 ft) | 19–60 m (62–196 ft) | 16–60+ m (52–196+ ft) | 13–60+ m (42–196+ ft) | 11–60 m (36–196 ft) | 11–55 m (36–180 ft) |
- Setting the radar sensitivity to Low will decrease the detection range by 20% while setting it to High will increase the detection range by 20%.
- In installations where you expect small animals to appear outside the fusion zone, but still in the detection zone of the radar, you can minimize the false alarms by setting the radar sensitivity to Low. This will however reduce the detection range.
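The 20% adjustment described above can be expressed as a simple scaling rule. The sketch below is illustrative only — the function name and the notion of a single "nominal" range are assumptions, not part of the product's API; the percentages come from the note above.

```python
# Hypothetical helper: scale a nominal detection range by the radar
# sensitivity setting (Low -20%, Medium unchanged, High +20%),
# per the figures stated in this manual.
SENSITIVITY_FACTOR = {"low": 0.8, "medium": 1.0, "high": 1.2}

def effective_range(nominal_m, sensitivity):
    """Return the approximate detection range in meters for a given
    sensitivity setting, applied to the nominal (Medium) range."""
    return nominal_m * SENSITIVITY_FACTOR[sensitivity]

# 60 m nominal human detection range:
print(round(effective_range(60, "low"), 1))   # 48.0
print(round(effective_range(60, "high"), 1))  # 72.0
```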
Road detection range
The road monitoring profile is optimized for detection of vehicles and provides a speed accuracy of +/- 2 km/h (1.24 mph) when monitoring vehicles moving at up to 200 km/h (125 mph).
The mounting height of the radar-video fusion camera and the vehicle speed will impact the detection range of the radar. When mounted at an optimal installation height, the radar detects approaching and departing vehicles with a speed accuracy of +/- 2 km/h (1.24 mph) within the following ranges:
25–100 m (82–328 ft) for vehicles moving at 50 km/h (31 mph).
40–80 m (131–262 ft) for vehicles moving at 100 km/h (62 mph).
50–70 m (164–230 ft) for vehicles moving at 200 km/h (125 mph).
To minimize the risk of missed detections of vehicles travelling at high speeds, set up a scenario in the radar that triggers on the object types Vehicle and Unknown. For more information about how to set up a scenario, see Add scenarios.
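The manual states detection limits at three vehicle speeds only. If you need an estimate at an intermediate speed during site design, a linear interpolation between the stated points is one rough option — the interpolated values are an assumption of this sketch, not figures from the manual.

```python
# Stated (speed km/h, near limit m, far limit m) points from the manual.
POINTS = [
    (50, 25, 100),
    (100, 40, 80),
    (200, 50, 70),
]

def detection_limits(speed):
    """Estimate (near, far) detection limits in meters for a vehicle
    speed in km/h by linear interpolation between the stated points.
    Speeds outside the table are clamped to the nearest stated point."""
    if speed <= POINTS[0][0]:
        return POINTS[0][1], POINTS[0][2]
    if speed >= POINTS[-1][0]:
        return POINTS[-1][1], POINTS[-1][2]
    for (s0, n0, f0), (s1, n1, f1) in zip(POINTS, POINTS[1:]):
        if s0 <= speed <= s1:
            t = (speed - s0) / (s1 - s0)
            return n0 + t * (n1 - n0), f0 + t * (f1 - f0)

print(detection_limits(150))  # (45.0, 75.0)
```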
Radar-video fusion coverage
The analytics fusion zone, which is the zone where an object can be detected and classified by both technologies, depends on factors like:
The installation height of the camera.
The tilt angle of the camera.
The zoom level of the camera lens.
The lighting conditions of the surrounding environment, and of the light provided by the camera itself and other devices in the site.
The distance to the moving object.
Once the radar-video fusion camera has been installed, the radar coverage is fixed. The field of view of the camera, however, depends on the zoom level of the lens.
To visualize the camera’s field of view in relation to the radar coverage, two green lines in the radar stream represent the approximate field of view of the camera. The lines adjust when the camera zooms in or out. Additionally, two dotted lines represent the approximate area in which the camera can see: the dotted line closer to the device represents the near detection limit, while the one farther away represents the far detection limit.
Zoom level examples
The size of the analytics fusion zone is affected by the zoom level of the lens in AXIS Q1656-DLE. The two extremes of the zoom level are described below.
Lens zoomed out (max wide)
When the lens in AXIS Q1656-DLE is zoomed out maximally, objects can become too small for the video analytics to detect. In this scenario, it's likely that objects will be detected by the radar, with its wide coverage, but not by the video analytics. If you want visual confirmation in the entire detection range of the radar, you can pair AXIS Q1656-DLE with one or more PTZ cameras.
Lens zoomed in (max tele)
When the lens is zoomed in maximally, it will limit the camera’s field of view significantly. However, since objects far away are magnified compared to when the lens is zoomed out maximally, it means the objects can be detected by the video analytics at a much larger distance from the device. In this scenario, it’s possible that objects will be detected by the video analytics but not by the radar analytics.
To maximize the chance of accurately classifying an object by both the radar and the video analytics, adjust the zoom – if possible – so that objects in the area of interest are large enough for the video analytics to detect them.
Radar-video detections and classifications
Since AXIS Q1656-DLE can detect and classify objects using radar and video — or just one of the technologies — there are several things to be aware of.
If two people walking close together are detected by the radar but not by the video analytics, they will be classified as one person, and only one bounding box will surround them. When they enter the analytics fusion zone and visual confirmation is achieved, they will be classified accurately. The spatial differentiation of the radar in AXIS Q1656-DLE is 3 m (9 ft).
If an object is outside the camera's field of view, AXIS Q1656-DLE can't fuse any detections or classifications into the image plane. This means that AXIS Object Analytics can't trigger an alarm. To trigger an alarm when an object is detected by the radar only, configure a scenario in the radar’s web interface and use conditions to trigger on motion within the radar scenario.
The exclude zones you add in the radar’s web interface are global, which means that any motion detected in these zones will always be ignored – even if the exclude zone overlaps with the analytics fusion zone in AXIS Object Analytics. The exclude zones you add in AXIS Object Analytics, however, will only ignore motion in AXIS Object Analytics scenarios.
Area installation
To get the best radar performance in area installations, select the area monitoring profile in AXIS Q1656-DLE. For more information, see Select a radar profile.
Area installation examples
You can place multiple radar-video fusion cameras side-by-side to create a virtual fence, for example along or around a building.
For 180° radar coverage, place two AXIS Q1656-DLE next to each other. When you install more than one pair of radar-video fusion cameras side-by-side, we recommend placing them with 100 m (330 ft) spacing between each pair, as shown in the example.
You can install up to eight radar-video fusion cameras close together without interference between the radars. For more information about placing Axis radar devices close together, see Install multiple Axis radar devices.
Area monitoring use cases
Cover an open field around a building
A company in an office building needs to secure the premises from intrusion and vandalism, particularly after working hours, on weekends and on public holidays. To cover the area around the building, they install a combination of radar-video fusion cameras and PTZ cameras. They configure the radar-video fusion cameras to trigger an alarm when humans and vehicles approach the building. To get as reliable detections and classifications as possible, they select a detection sensitivity in AXIS Object Analytics suitable for the area. For more information about detection sensitivity, see Select detection sensitivity.
To make sure they get visual confirmation of potential intruders in the entire detection range of the radar, they add two PTZ cameras with built-in IR on opposite corners of the building. The radars steer the PTZ cameras through AXIS Radar Autotracking for PTZ, and the built-in IR also provides more light for the radar-video fusion cameras, which makes it possible to detect and identify intruders at a greater distance.
Cover a fenced building
A warehouse that normally keeps goods on the premises is surrounded by a fence to keep intruders away. To detect potential trespassers, they install a combination of radar-video fusion cameras and PTZ cameras with built-in IR to secure the premises. The radar-video fusion cameras provide reliable detections and trigger alarms, while the PTZ cameras extend the visual coverage. The PTZ cameras’ built-in IR also provides more light for the radar-video fusion cameras, which makes it possible to detect and identify intruders at a greater distance.
In this scene, the area outside the fence is not covered since it’s a busy area that can trigger false alarms. In scenes with less activity, the area outside the fence could be covered as well. In such a scene, it would be possible to configure the cameras to trigger external lights when movement is detected outside the fence to deter potential intruders. They could also trigger an alarm when intruders are actually detected inside the fence. To be able to detect movement outside the fence, the cameras need to be mounted high enough.
Cover a critical asset
A telecom shelter that contains critical equipment and cabling is surrounded by a fence to keep intruders away. To avoid tampering and sabotage, they need additional protection. Since it’s important to keep false alarms to a minimum, they install two radar-video fusion cameras in the opposite corners of the site. The cameras can together cover the shelter, the antennas and the grounds. With the use of both the radar and video technology in the radar-video fusion cameras, the cameras can provide reliable detections and classifications of potential trespassers.
It’s possible to place radar-video fusion cameras facing each other in this way without interference between the radars. However, to make sure the video technology can provide accurate detections and classifications, good lighting conditions are required.
Cover the area around a loading bay
The loading bay of a commercial building is surrounded by a fence to protect the premises. For additional security, the company installs a thermal camera and three radar-video fusion cameras on the site. To detect potential trespassers, they install a thermal camera along the fence. To detect intruders that have managed to pass the fence, they install two of the radar-video fusion cameras on a pole facing the loading docks. These cameras will be able to detect and classify humans and vehicles moving around the docks and can trigger an alarm after working hours. To detect any intruders coming through the area with the turning point on the right side, they install an additional radar-video fusion camera facing the area. Finally, the thermal camera can also help to detect tampering attempts of the two cameras installed close to the fence.
Road installation
To get the best radar performance in road installations, select the road monitoring profile in AXIS Q1656-DLE. For more information, see Select a radar profile.
Road installation examples
When monitoring roads and highways, make sure to mount the radar-video fusion camera at a sufficient height to avoid blind spots (radar shadow) behind the vehicles.
The size of the radar shadow depends on the radar-video fusion camera’s mounting height and the vehicle’s height and distance from the radar. For example, when a vehicle with a height of 4.5 m (15 ft) is 50 m (164 ft) away from a radar-video fusion camera mounted at a height of 8 m (26 ft), the radar shadow behind the vehicle will be 50 m (164 ft). However, if the radar-video fusion camera is mounted at a height of 12 m (39 ft), the shadow behind the same vehicle will only be 23 m (74 ft).
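The figures above come from measured radar performance. As a rough sanity check during site design, the line-of-sight geometry can be sketched with a similar-triangles model — note that this is a simplification and does not reproduce the manual's exact measured figures.

```python
def radar_shadow(mount_h, vehicle_h, dist):
    """Approximate radar-shadow length (m) behind a vehicle, using a
    similar-triangles line-of-sight model: the shadow ends where the
    sight line over the vehicle's top meets the ground. This is a
    geometric approximation, not the device's measured behavior."""
    if mount_h <= vehicle_h:
        return float("inf")  # radar cannot see over the vehicle at all
    return dist * vehicle_h / (mount_h - vehicle_h)

# 4.5 m tall vehicle, 50 m from the radar:
print(round(radar_shadow(8.0, 4.5, 50.0), 1))   # 64.3 (longer shadow at 8 m mount)
print(round(radar_shadow(12.0, 4.5, 50.0), 1))  # 30.0 (shorter shadow at 12 m mount)
```

The model confirms the trend in the manual (mounting higher shrinks the shadow), even though the absolute numbers differ from the measured values.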
Side mounted
To monitor vehicles travelling along a road you can mount the radar-video fusion camera on the side of the road, for example on a pole. In this type of installation, we recommend a pan angle of max 25°.
To make it possible for the radar in AXIS Q1656-DLE to measure high speeds accurately, position the radar-video fusion camera within a lateral distance of 10 m (32 ft) from the vehicles. For more information about detection range and velocity accuracy, see Road detection range.
Center mounted
To monitor vehicles on a multi-lane road, you can mount one or more radar-video fusion cameras on a gantry above the road.
The same type of installation is possible if you want to monitor vehicles that drive away from the radar-video fusion camera, instead of driving towards it.
You can also place the radar-video fusion camera on a gantry overlooking a pedestrian crossing with traffic lights, for example to log the speeds of departing vehicles, or detect speed violations.
To make it possible for the radar in AXIS Q1656-DLE to measure high speeds accurately, position the radar-video fusion camera within a lateral distance of 10 m (32 ft) from the vehicles. For more information about detection range and velocity accuracy, see Road detection range.
Road monitoring use cases
Wrong-way detection on a highway ramp
To detect and identify vehicles driving in the wrong direction on a highway ramp, traffic control mounts AXIS Q1656-DLE on a pole facing the ramp. For reliable detections, they set up a line crossing scenario in the radar pages of the device’s web interface and configure it so that vehicles must cross two lines to trigger an alarm. In the radar scenario, they position the two lines on the ramp as seen in the illustration and specify the driving direction and speeds to trigger on. With this configuration, the radar will trigger the alarm, and the camera can provide visual identification of the vehicle on the ramp. For information about how to set up a radar scenario, see Add scenarios.
Monitor traffic flow at an intersection – queue build-up
To monitor how and when queues build up in a busy intersection, traffic control installs AXIS Q1656-DLE on a gantry above the intersection. They set up an object in area scenario in AXIS Object Analytics that will trigger on vehicles moving in an area. They shape the scenario to only cover the part of the road leading up to the intersection and select a detection sensitivity that suits the scene. To trigger an alarm when queues start to build up, they configure the scenario to trigger on vehicles moving at speeds below 5 km/h (3 mph). For information about how to configure an AXIS Object Analytics scenario and select a suitable detection sensitivity, see Configure AXIS Object Analytics.
Monitor traffic flow at an intersection – direction
To get an overview of the traffic flow and the direction vehicles travel in a busy intersection, traffic control installs AXIS Q1656-DLE on a gantry above the road leading up to the intersection. They set up a line crossing scenario in the radar pages of the device’s web interface where vehicles must cross two lines to trigger an alarm. When they configure the radar scenario, they place the first of the two lines over the lanes leading up to the intersection, just after the pedestrian crossing to avoid vehicles stopping at the line. They place the second line over the lanes leading to the right. The vehicles must cross both lines in the specified direction to trigger an alarm. To avoid triggering on more than one vehicle per crossing, they lower the minimum trigger duration in the radar scenario from 2 to 0 seconds.
To monitor the traffic flow in all directions, they create one radar scenario for each direction. For information about how to set up a radar scenario, see Add scenarios.
The radar scenario doesn’t count the vehicles crossing the lines; instead, you can use the event system in the device’s web interface to keep count. One way to count vehicles is to send an MQTT message each time the radar scenario triggers, and count the triggers on the MQTT receiver side.
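The receiver-side counting could be sketched as below. The topic name and JSON payload layout here are illustrative assumptions, not the device's actual message format; in a real deployment the messages would arrive via an MQTT client (for example paho-mqtt) subscribed to the topic configured on the device.

```python
import json

class VehicleCounter:
    """Counts radar-scenario triggers per scenario name, assuming the
    device publishes one MQTT message per trigger (payload format is
    a hypothetical example, not the device's documented schema)."""

    def __init__(self):
        self.counts = {}  # scenario name -> number of triggers

    def on_message(self, topic, payload):
        event = json.loads(payload)
        scenario = event.get("scenario", "unknown")
        self.counts[scenario] = self.counts.get(scenario, 0) + 1

# Simulated incoming messages (in practice, delivered by an MQTT client):
counter = VehicleCounter()
counter.on_message("axis/radar/trigger", '{"scenario": "right-turn"}')
counter.on_message("axis/radar/trigger", '{"scenario": "right-turn"}')
counter.on_message("axis/radar/trigger", '{"scenario": "straight"}')
print(counter.counts)  # {'right-turn': 2, 'straight': 1}
```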
Get started
Find the device on the network
To find Axis devices on the network and assign them IP addresses in Windows®, use AXIS IP Utility or AXIS Device Manager. Both applications are free and can be downloaded from axis.com/support.
For more information about how to find and assign IP addresses, go to How to assign an IP address and access your device.
Browser support
You can use the device with the following browsers:
 | Chrome™ | Firefox® | Edge™ | Safari® |
Windows® | recommended | recommended | ✓ | |
macOS® | recommended | recommended | ✓ | ✓ |
Linux® | recommended | recommended | ✓ | |
Other operating systems | ✓ | ✓ | ✓ | ✓* |
*To use AXIS OS web interface with iOS 15 or iPadOS 15, go to Settings > Safari > Advanced > Experimental Features and disable NSURLSession Websocket.
Open the device's web interface
Open a browser and type the IP address or host name of the Axis device.
If you do not know the IP address, use AXIS IP Utility or AXIS Device Manager to find the device on the network.
Type the username and password. If you access the device for the first time, you must create an administrator account. See Create an administrator account.
For descriptions of all the controls and options in the device’s web interface, see The web interface.
Create an administrator account
The first time you log in to your device, you must create an administrator account.
Enter a username.
Enter a password. See Secure passwords.
Re-enter the password.
Accept the license agreement.
Click Add account.
The device has no default account. If you lose the password for your administrator account, you must reset the device. See Reset to factory default settings.
Secure passwords
Axis devices send the initially set password in clear text over the network. To protect your device after the first login, set up a secure and encrypted HTTPS connection and then change the password.
The device password is the primary protection for your data and services. Axis devices do not impose a password policy as they may be used in various types of installations.
To protect your data we strongly recommend that you:
Use a password with at least 8 characters, preferably created by a password generator.
Don’t expose the password.
Change the password at a recurring interval, at least once a year.
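A password that follows the recommendations above can be generated with any reputable password generator. As one hedged example, Python's standard `secrets` module (designed for cryptographic use) can produce one; the character set and default length here are choices of this sketch, not an Axis requirement beyond the 8-character minimum.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random device password using a cryptographically
    secure source. Length defaults to 16, well above the recommended
    8-character minimum."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pwd = generate_password()
print(len(pwd))  # 16
```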
Verify that no one has tampered with the device software
To make sure that the device has its original AXIS OS, or to take full control of the device after a security attack:
Reset to factory default settings. See Reset to factory default settings.
After the reset, secure boot guarantees the state of the device.
Configure and install the device.
Web interface overview
This video gives you an overview of the device’s web interface.
Configure your device
Basic settings
Set the power line frequency
Go to Video > Installation > Power line frequency.
Click Change.
Select a power line frequency and click Save and restart.
Set the capture mode
Go to Video > Installation > Capture mode.
Click Change.
Select a capture mode and click Save and restart.
See also Capture modes.
Adjust the image
This section includes instructions about configuring your device. If you want to learn more about how certain features work, go to Learn more.
Select exposure mode
To improve image quality for specific surveillance scenes, use exposure modes. Exposure modes let you control aperture, shutter speed, and gain. Go to Video > Image > Exposure and select between the following exposure modes:
For most use cases, select Automatic exposure.
For environments with certain artificial lighting, for example fluorescent lighting, select Flicker-free.
Select the same frequency as the power line frequency.
For environments with certain artificial light and bright light, for example outdoors with fluorescent lighting at night and sun during daytime, select Flicker-reduced.
Select the same frequency as the power line frequency.
To lock the current exposure settings, select Hold current.
Optimize IR illumination
Depending on the installation environment and the conditions around the camera, for example external light sources in the scene, you can sometimes improve the image quality if you manually adjust the intensity of the LEDs. If you have problems with reflections from the LEDs, you can try to reduce the intensity.
Go to Video > Image > Day-night mode.
Turn on Allow illumination.
Click in the live view and select Manual.
Adjust the intensity.
Benefit from IR light in low-light conditions by using night mode
Your camera uses visible light to deliver color images during the day. But as the visible light diminishes, color images become less bright and clear. If you switch to night mode when this happens, the camera uses both visible and near-infrared light to deliver bright and detailed black-and-white images instead. You can set the camera to switch to night mode automatically.
Go to Video > Image > Day-night mode, and make sure that the IR-cut filter is set to Auto.
To use the built-in IR light when the camera is in night mode, turn on Allow illumination and Synchronize illumination.
Reduce noise in low-light conditions
To reduce noise in low-light conditions, you can adjust one or more of the following settings:
Adjust the trade-off between noise and motion blur. Go to Video > Image > Exposure and move the Blur-noise trade-off slider toward Low noise.
Set the exposure mode to automatic.
To slow down the shutter speed, set max shutter to the highest possible value.
- Note
A high max shutter value can result in motion blur.
Set the max gain to a lower value.
- Note
When you reduce the max gain, the image can become darker.
If there is an Aperture slider, move it towards Open.
Reduce sharpness in the image, under Video > Image > Appearance.
Reduce motion blur in low-light conditions
To reduce motion blur in low-light conditions, adjust one or more of the following settings in Video > Image > Exposure:
Set Max shutter to a shorter time, and Max gain to a higher value.
- Note
When you increase the gain, image noise also increases.
- If you still have problems with motion blur:
Increase the light level in the scene.
Mount the camera so that objects move toward it or away from it rather than sideways.
Maximize the details in an image
If you maximize the details in an image, the bitrate will probably increase and you might get a reduced frame rate.
Make sure to select the capture mode that has the highest resolution.
Go to Video > Stream > General and set the compression as low as possible.
Below the live view image, click the settings button and, in Video format, select MJPEG.
Go to Video > Stream > Zipstream and select Off.
Handle scenes with strong backlight
Dynamic range is the difference in light levels in an image. In some cases the difference between the darkest and the brightest areas can be significant. The result is often an image where either the dark or the bright areas are visible. Wide dynamic range (WDR) makes both dark and bright areas of the image visible.
- WDR can cause artifacts in the image.
- WDR may not be available for all capture modes.
Go to Video > Image > Wide dynamic range.
Turn on WDR.
Use the Local contrast slider to adjust the amount of WDR.
If you still have problems, go to Exposure and adjust the Exposure zone to cover the area of interest.
Find out more about WDR and how to use it at axis.com/web-articles/wdr.
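As background (not part of the camera interface), dynamic range is often quantified in stops: the base-2 logarithm of the ratio between the brightest and darkest usable luminance levels. A minimal sketch, with made-up luminance values:

```python
import math

def dynamic_range_stops(brightest: float, darkest: float) -> float:
    """Dynamic range in stops: each stop is a doubling of the light level."""
    return math.log2(brightest / darkest)

# A scene where the brightest area is 1024 times brighter than the darkest
# spans 10 stops; this is the kind of scene where WDR helps.
print(dynamic_range_stops(1024.0, 1.0))  # 10.0
```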
Stabilize a shaky image with image stabilization
Image stabilization is suitable in environments where the product is mounted in an exposed location where vibrations can occur, for example, due to wind or passing traffic.
The feature makes the image smoother, steadier, and less blurry. It also reduces the file size of the compressed image and lowers the bitrate of the video stream.
When you turn on image stabilization, the image is slightly cropped, which lowers the maximum resolution.
Go to Video > Installation > Image correction.
Turn on Image stabilization.
Hide parts of the image with privacy masks
You can create one or several privacy masks to hide parts of the image.
Go to Video > Privacy masks.
Click the add button.
Click the new mask and type a name.
Adjust the size and placement of the privacy mask according to your needs.
To change the color for all privacy masks, click Privacy masks and select a color.
See also Privacy masks
Show an image overlay
You can add an image as an overlay in the video stream.
Go to Video > Overlays.
Select Image and click the add button.
Click Images.
Drag and drop an image.
Click Upload.
Click Manage overlay.
Select the image and a position. You can also drag the overlay image in the live view to change the position.
Show the radar live view in the image
Use onscreen controls to see both the live view of the video and the radar in the same stream.
Go to Video > Image.
Click in the live view to access the product’s onscreen controls.
Select Predefined controls.
Turn on Radar picture-in-picture.
Click Enable picture-in-picture.
To change the size of the radar projection, click Resize picture-in-picture.
To change the position of the radar projection, click Move picture-in-picture.
Add street names and compass direction to the image
The street name and compass direction will be visible on all video streams and recordings.
Go to Apps.
Select axis-orientationaid.
Click Open.
To add a street name, click Add text and modify the text to fit the street.
To add a compass, click Add compass and modify the compass to fit the image.
Record and watch video
- Record video directly from the camera
Go to Video > Image.
To start a recording, click the record button.
If you haven’t set up any storage, set it up first. For instructions on how to set up network storage, see Set up network storage.
To stop the recording, click the record button again.
- Watch video
Go to Recordings.
Click the play button for your recording in the list.
View and record video
This section includes instructions about configuring your device. To learn more about how streaming and storage works, go to Streaming and storage.
Reduce bandwidth and storage
Reducing the bandwidth can lead to loss of detail in the image.
Go to Video > Stream.
Click the settings button in the live view.
Select Video format H.264.
Go to Video > Stream > General and increase Compression.
Go to Video > Stream > Zipstream and do one or more of the following:
- Note
The Zipstream settings are used for both H.264 and H.265.
Select the Zipstream Strength that you want to use.
Turn on Optimize for storage. This can only be used if the video management software supports B-frames.
Turn on Dynamic FPS.
Turn on Dynamic GOP and set a high Upper limit GOP length value.
Most web browsers don’t support H.265 decoding and because of this the device doesn’t support it in its web interface. Instead you can use a video management system or application that supports H.265 decoding.
Set up network storage
- To store recordings on the network, you need to set up your network storage.
Go to System > Storage.
Click Add network storage under Network storage.
Type the IP address of the host server.
Type the name of the shared location on the host server under Network share.
Type the username and password.
Select the SMB version or leave it on Auto.
Select Add share without testing if you experience temporary connection issues, or if the share is not yet configured.
Click Add.
Configure the radar
The radar-video fusion camera is factory-calibrated so that the camera and radar module are perfectly aligned. Do not move or remove the lens, optical unit, or radar module, since this will undo the calibration and alignment.
Select a radar profile
The radar in this radar-video fusion camera has two profiles; one that is optimized for area monitoring, and one that is optimized for road monitoring. Select the profile that suits your type of installation.
- In the web interface:
Go to Radar > Settings > Detection.
Select a profile under Radar profiles.
Set the mounting height
Set the mounting height of the device in the radar's web interface. This helps the radar to detect and measure the speed of passing objects correctly.
Measure the height from the ground up to the device as accurately as possible. For scenes with uneven surfaces, set the value that represents the average height in the scene.
If the height is set incorrectly, the bounding boxes that appear in AXIS Object Analytics when an object is detected will not be positioned accurately.
Go to Radar > Settings > General.
Set the height under Mounting height.
You can also set the mounting height in AXIS Object Analytics. Setting the height in one place will automatically populate the mounting height in the other.
Go to Apps > AXIS Object Analytics.
Turn on the application and click Open.
Click Settings.
Set the height under Mounting height.
Validate the mounting height
To validate that you have measured and set the correct mounting height of the device, add an augmented overlay in the camera’s live view. The overlay consists of white bounding boxes projected around moving objects.
Go to Video > Image.
Click in the live view to access the device’s onscreen controls.
Expand Predefined controls.
Turn on Augmented overlay (radar).
Click Toggle augmented bounding boxes.
Ask someone to move in the scene you’re monitoring and check in the camera’s live view that the bounding boxes are projected around the moving objects, and not above, below, or next to them.
If necessary, re-measure the mounting height, adjust the settings, and check again.
- Turn off the augmented overlay when you’re done with the validation.
If the scene contains variations in elevation, use the autocalibration feature to improve the accuracy of the bounding boxes based on radar detections. For more information, see Autocalibrate the device.
Calibrate a reference map
Upload a reference map to make it easier to see where detected objects are moving. You can use a ground plan or an aerial photo that shows the area covered by the radar. Calibrate the map so the radar coverage fits the position, direction, and scale of the map, and zoom in on the map if you're interested in a specific part of the radar coverage.
You can either use a setup assistant that takes you through the map calibration step by step, or edit each setting individually.
- Use the setup assistant:
Go to Radar > Map calibration.
Click Setup assistant and follow the instructions.
- To remove the uploaded map and the settings you have added, click Reset calibration.
- Edit each setting individually:
- The map will calibrate gradually after you adjust each setting.
Go to Radar > Map calibration > Map.
Select the image you want to upload, or drag and drop it in the designated area.
To reuse a map image with its current pan and zoom settings, click Download map.
Under Rotate map, use the slider to rotate the map into position.
Go to Scale and distance on a map and click on two pre-determined points on the map.
Under Distance, add the actual distance between the two points you have added to the map.
Go to Pan and zoom map and use the buttons to pan the map image, or zoom in and out on the map image.
- Note
The zoom function does not alter the radar’s area of coverage. Even if parts of the coverage are out of view after zooming, the radar will still detect moving objects in the entire area of coverage. The only way to exclude detected movement is to add exclude zones. For more information, see Add exclude zones.
Go to Radar position and use the buttons to move or rotate the position of the radar on the map.
- To remove the uploaded map and the settings you have added, click Reset calibration.
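Conceptually, the two points and the actual distance you enter give the reference map its scale. The sketch below is illustrative only; the coordinates and helper name are not part of the interface:

```python
import math

def map_scale(p1, p2, actual_distance_m):
    """Meters per pixel, derived from two map points and their real-world distance."""
    pixel_distance = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return actual_distance_m / pixel_distance

# Two points 400 px apart on the map that are 100 m apart in reality:
print(map_scale((100, 100), (500, 100), 100.0))  # 0.25 m per pixel
```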
Set detection zones
To determine where to detect motion, you can add one or more detection zones. Use different zones to trigger different actions.
- There are two types of zones:
A scenario (previously called include zone) is an area in which moving objects will trigger rules. The default scenario matches the entire area covered by the radar.
An exclude zone is an area in which moving objects will be ignored. Use exclude zones if there are areas inside a scenario that trigger a lot of unwanted alarms.
Add scenarios
A scenario is a combination of triggering conditions and detection settings, which you can use to create rules in the event system. Add scenarios if you want to create different rules for different parts of the scene.
- Add a scenario:
Go to Radar > Scenarios.
Click Add scenario.
Type the name of the scenario.
Select if you want to trigger on objects moving in an area or on objects crossing one or two lines.
- Trigger on objects moving in an area:
Select Movement in area.
Click Next.
Select the type of zone that should be included in the scenario.
Use the mouse to move and shape the zone so that it covers the desired part of the radar image or reference map.
Click Next.
Add detection settings.
Set the number of seconds until trigger under Ignore short-lived objects.
Select which object type to trigger on under Trigger on object type.
Add a range for the speed limit under Speed limit.
Click Next.
Set the minimum duration of the alarm under Minimum trigger duration.
Click Save.
- Trigger on objects crossing a line:
Select Line crossing.
Click Next.
Position the line in the scene.
Use the mouse to move and shape the line.
To change the detection direction, turn on Change direction.
Click Next.
Add detection settings.
Set the number of seconds until trigger under Ignore short-lived objects.
Select which object type to trigger on under Trigger on object type.
Add a range for the speed limit under Speed limit.
Click Next.
Set the minimum duration of the alarm under Minimum trigger duration.
The default value is 2 seconds. If you want the scenario to trigger every time an object crosses the line, lower the duration to 0 seconds.
Click Save.
- Trigger on objects crossing two lines:
Select Line crossing.
Click Next.
To make the object cross two lines for the alarm to trigger, turn on Require crossing of two lines.
Position the lines in the scene.
Use the mouse to move and shape the line.
To change the detection direction, turn on Change direction.
Click Next.
Add detection settings.
Set the time limit between crossing the first and the second line under Max time between crossings.
Select which object type to trigger on under Trigger on object type.
Add a range for the speed limit under Speed limit.
Click Next.
Set the minimum duration of the alarm under Minimum trigger duration.
The default value is 2 seconds. If you want the scenario to trigger every time an object crosses the two lines, lower the duration to 0 seconds.
Click Save.
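The two-line condition can be sketched as a timing check: the alarm should only trigger if the second line is crossed after the first, and within the configured Max time between crossings. The function below is an illustration, not the device's actual logic:

```python
def two_line_trigger(t_first: float, t_second: float, max_between: float) -> bool:
    """True if the second line is crossed after the first and within the time limit."""
    return 0 <= t_second - t_first <= max_between

print(two_line_trigger(10.0, 12.5, 5.0))  # True: second crossing within 5 s
print(two_line_trigger(10.0, 17.0, 5.0))  # False: too slow between crossings
```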
Add exclude zones
Exclude zones are areas in which moving objects will be ignored. Add exclude zones to ignore, for example, swaying foliage on the side of a road. You could also add exclude zones to ignore ghost tracks caused by radar-reflective materials, for example a metal fence.
- Add an exclude zone:
Go to Radar > Exclude zones.
Click Add exclude zone.
Use the mouse to move and shape the zone so that it covers the desired part of the radar view or reference map.
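Scenarios and exclude zones are both polygonal areas. Conceptually, deciding whether a detection falls inside a zone comes down to a point-in-polygon test; the ray-casting sketch below is illustrative only, not the device's implementation:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count edge crossings of a ray going right from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

exclude_zone = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, exclude_zone))   # True: detection is ignored
print(point_in_polygon(15, 5, exclude_zone))  # False: detection is kept
```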
Autocalibrate the device
Autocalibration of the radar-video fusion camera improves the accuracy of the bounding boxes that appear around detected objects in AXIS Object Analytics. With autocalibration, the device uses information from the video, like height and angular accuracy, to improve the positioning of the bounding boxes based on the radar detections.
The autocalibration doesn’t affect the detections, only the visualization of the bounding boxes.
- For elevation calibration:
Go to Radar > Autocalibration > Elevation.
Turn on Autocalibration.
The autocalibration occurs as soon as calibration data is available.
Select a Smoothing option.
If your scene contains little variation in elevation, leave Smoothing set to High.
If your scene is hilly or sloped, or if it contains stairs or high buildings, set Smoothing to Low to keep the differences in elevation.
Visualize the result of the calibration in the web interface with the following options:
Show elevation pattern shows the vertical distance from the ground up to the camera in a pattern of colored dots.
Show color legend shows a legend containing the colors of the elevation pattern and the vertical distance that each color represents.
Show reference area shows the area that the calibration is based on.
- For azimuth calibration:
Go to Radar > Autocalibration > Azimuth.
Turn on Autocalibration.
The autocalibration occurs as soon as calibration data is available.
Show a text overlay with the tilt angle of the radar
You can add an overlay in the radar’s live view that shows the tilt angle of the radar. This is helpful during installation, or whenever you need to know the tilt angle of the device.
The tilt angle overlay shows “90” when the device is horizontal. If the overlay shows “75”, the tilt angle of the radar is 15° below the horizon.
Go to Radar > Overlays.
Select Text and click the add button.
Type #op.
You can also click Modifier and select #op from the list.
Select a position. You can also drag the overlay field in the live view to change the position.
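The relation between the #op overlay value and the tilt angle is a simple subtraction. As a sketch:

```python
def tilt_below_horizon(overlay_value: float) -> float:
    """Degrees below the horizon, given the #op overlay reading (90 = horizontal)."""
    return 90.0 - overlay_value

print(tilt_below_horizon(90))  # 0.0: the device is horizontal
print(tilt_below_horizon(75))  # 15.0: the radar tilts 15 degrees below the horizon
```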
Configure AXIS Object Analytics
AXIS Object Analytics is an AI-based application that detects and classifies moving objects. It’s also the main interface for configuring the radar-video fusion in AXIS Q1656-DLE. The real-time output of the fusion can only be seen in the video stream inside a scenario configured in the application.
Create a scenario
Use scenarios in AXIS Object Analytics to define the detection settings and triggering conditions for your radar-video fusion camera.
In the device’s web interface, go to Apps > AXIS Object Analytics.
Start the application and click Open.
In the welcome screen, click Step-by-step and follow the recommended setup procedure.
In Considerations, read through the information and click Finish.
Click + New scenario.
- Note
By default, the scenarios called Object in area and Line crossing use both video and radar input. The other scenarios in AXIS Object Analytics use video input only.
Select a scenario based on your requirements.
Select the type of object you want the application to detect.
Configure your scenario.
Verify your settings and click Finish.
You have now created a scenario in AXIS Object Analytics. To modify the scenario and apply additional settings, click Open. For scenarios that use both radar and video input, you can use speed to trigger and select a detection sensitivity. For instructions, see:
To learn more about AXIS Object Analytics and its general settings, see AXIS Object Analytics user manual.
Some of the considerations and features described in AXIS Object Analytics user manual don’t apply to radar-video fusion cameras.
Use speed to trigger
If you have created an Object in area or Line crossing scenario in AXIS Object Analytics, you can trigger on objects moving within a set speed range, or above and below it.
Go to Apps > AXIS Object Analytics.
Start the application and click Open.
Select the scenario you want to modify and click Open.
Go to Object speed and turn on Use speed to trigger.
Set the speed range to trigger on.
If you want to trigger on speeds above and below your set range, click Invert.
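The effect of the speed range and the Invert option can be sketched as follows (illustrative only, not the application's actual code):

```python
def speed_triggers(speed, low, high, invert=False):
    """True if the object's speed should trigger the scenario."""
    in_range = low <= speed <= high
    return not in_range if invert else in_range

print(speed_triggers(50, 30, 60))               # True: within the 30-60 range
print(speed_triggers(80, 30, 60))               # False: above the range
print(speed_triggers(80, 30, 60, invert=True))  # True: range is inverted
```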
Select detection sensitivity
With the option to select detection sensitivity, you can decide if you want to trigger on detections made by either the video or the radar, or detections made by both. You can also let the device itself, based on the fusion algorithms, decide if it should rely on one of the technologies, or both.
This option is available in Object in area and Line crossing scenarios.
Go to Apps > AXIS Object Analytics.
Start the application and click Open.
Select the scenario you want to modify and click Open.
Go to Detection sensitivity and select one of the following options:
Low sensitivity: Requires both the radar and the camera to detect the object. This reduces the risk of false alarms, but increases the risk of missed detections.
To make sure that objects can be detected by both technologies, the scene can't be too complex. The light conditions need to be good, the detection area needs to be within the detection range of both technologies, and there should preferably not be any disturbing elements, such as trees or bushes.
Automatic: Lets the application decide if both the radar and the camera, or only one of them, is required to detect the object. This is the default option.
High sensitivity: Requires either the radar or the camera to detect the object. This increases the risk of false alarms, but reduces the risk of missed detections.
The lighting conditions and the size of the detection area are less important when you select high sensitivity since you only need one of the technologies to detect the object.
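The three sensitivity levels can be summarized as boolean logic over the two detectors. The sketch below is a simplification: in Automatic mode the device weighs the technologies with its fusion algorithms, which is only approximated here as accepting either detection:

```python
def fused_detection(radar: bool, video: bool, sensitivity: str) -> bool:
    """Illustrative fusion logic; the real Automatic mode is decided by the device."""
    if sensitivity == "low":
        return radar and video  # both required: fewer false alarms
    if sensitivity == "high":
        return radar or video   # either suffices: fewer missed detections
    # "automatic": simplified here as accepting either detection
    return radar or video

print(fused_detection(True, False, "low"))   # False: camera did not confirm
print(fused_detection(True, False, "high"))  # True: radar alone is enough
```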
You can improve the accuracy of the bounding boxes that appear around detected objects in AXIS Object Analytics with the autocalibration feature. Autocalibration doesn’t affect the detections, only the visualization of the bounding boxes.
For more information, see Autocalibrate the device.
Minimize false alarms
If you notice that you get too many false alarms, you can filter out certain types of movement or objects, change the coverage, or adjust the detection sensitivity. See which settings work best for your environment.
Adjust the detection sensitivity of AXIS Object Analytics:
Go to Apps > AXIS Object Analytics, open a scenario, and select a lower Detection sensitivity.
Low sensitivity: Requires both the radar and the camera to detect the object. Reduced risk of false alarms, but increased risk of missed detections.
Automatic: Lets the application decide if both radar and camera or only one of them is required to detect the object.
High sensitivity: Requires either the radar or the camera to detect the object. Increased risk of false alarms but reduced risk of missed detections.
Adjust the detection sensitivity of the radar:
Go to Radar > Settings > Detection and select a lower Detection sensitivity. This decreases the risk of false alarms, but it could also cause the radar to miss some movement.
Low: Use this sensitivity when there are a lot of metal objects or large vehicles in the area. It will take longer for the radar to track and classify objects. This can reduce the detection range, especially for fast-moving objects.
Medium: This is the default setting.
High: Use this sensitivity when you have an open field without metal objects in front of the radar. This will increase the detection range for humans.
Modify scenarios and exclude zones:
If a scenario includes hard surfaces, such as a metal wall, there may be reflections that cause multiple detections for a single physical object. You can either modify the shape of the scenario, or add an exclude zone that ignores certain parts of the scenario. For more information, see Add scenarios and Add exclude zones.
Trigger on objects crossing two lines instead of one:
If a line crossing scenario includes swaying objects or animals moving around, there is a risk that an object will happen to cross the line and trigger a false alarm. In this case, you can configure the scenario to trigger only when an object has crossed two lines. For more information, see Add scenarios.
Filter on movement:
Go to Radar > Settings > Detection and select Ignore swaying objects. This setting minimizes false alarms from trees, bushes, and flagpoles in the coverage zone.
Go to Radar > Settings > Detection and select Ignore small objects. This setting minimizes false alarms from small objects in the coverage zone, such as cats and rabbits.
Filter on time:
Go to Radar > Scenarios.
Select a scenario, and click to modify its settings.
Select a higher value under Seconds until trigger. This is the delay time from when the radar starts tracking an object until it can trigger an alarm. The timer starts when the radar first detects the object, not when the object enters the specified zone in the scenario.
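Because the delay is anchored to first detection rather than zone entry, an object that lingers outside the zone can trigger as soon as it enters. A sketch of this timing (illustrative only):

```python
def alarm_fire_time(first_detected, zone_entry, seconds_until_trigger):
    """Earliest alarm time: the delay runs from first detection, not zone entry."""
    return max(zone_entry, first_detected + seconds_until_trigger)

# Detected at t=0 s, delay 2 s, enters the zone at t=3 s: the delay has
# already elapsed, so the alarm can fire at t=3 s (not at t=3+2=5 s).
print(alarm_fire_time(0.0, 3.0, 2.0))  # 3.0
```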
Filter on object type:
Go to Radar > Scenarios.
Select a scenario, and click to modify its settings.
To avoid triggering on specific object types, deselect the object types that should not trigger events in the scenario.
- Note
The object type setting only affects the radar. It will be ignored by AXIS Object Analytics.
Set up rules for events
To learn more, check out our guide Get started with rules for events.
Save power when no motion is detected
This example explains how to turn on power saving mode when no motion is detected in the scene.
When you turn on power saving mode, the IR illumination range is reduced.
- Make sure that AXIS Object Analytics is running:
Go to Apps > AXIS Object Analytics.
Start the application if it is not already running.
Make sure you have set up the application according to your needs.
- Create a rule:
Go to System > Events and add a rule.
Type a name for the rule.
In the list of conditions, under Application, select Object Analytics.
Select Invert this condition.
In the list of actions, under Power saving mode, select Use power saving mode while the rule is active.
Click Save.
Trigger an alarm if someone opens the enclosure
This example explains how to trigger an alarm if someone opens the housing or casing of the device.
- Add a recipient:
Go to System > Events > Recipients and click Add recipient.
Type a name for the recipient.
Select Email.
Type an email address to send the email to.
The camera doesn’t have its own email server, so it needs to log in to another email server to be able to send emails. Fill in the rest of the information according to your email provider.
To send a test email, click Test.
Click Save.
- Create a rule:
Go to System > Events > Rules and add a rule.
Type a name for the rule.
In the list of conditions, select Casing open.
In the list of actions, select Send notification to email.
Select a recipient from the list.
Type a subject and a message for the email.
Click Save.
Send an email if someone covers the radar with a metallic object
This example explains how to create a rule that sends an email notification when someone tampers with the radar by covering it with a metallic object, such as metallic foil or a metallic sheet.
The option to create rules for radar tampering events is available from AXIS OS 11.11.
- Add an email recipient:
Go to System > Events > Recipients and click Add recipient.
Type a name for the recipient.
Select Email.
Type an email address to send the email to.
The camera doesn’t have its own email server, so it needs to log in to another email server to be able to send emails. Fill in the rest of the information according to your email provider.
To send a test email, click Test.
Click Save.
- Create a rule:
Go to System > Events and add a rule.
Type a name for the rule.
From the list of conditions, under Device status, select Radar data failure.
Under Reason, select Tampering.
From the list of actions, under Notifications, select Send notification to email.
Select the recipient you created.
Type a subject and a message for the email.
Click Save.
Control a PTZ camera with the radar
It’s possible to use the information about objects’ positions from the radar to make a PTZ camera track objects. There are two ways to do this:
Control a PTZ camera with the built-in radar autotracking service. The built-in option is suitable when you have a PTZ camera and radar mounted very close together.
Control a PTZ camera with AXIS Radar Autotracking for PTZ. The Windows application is suitable when you want to use multiple PTZ cameras and radars for tracking objects.
Use an NTP server to synchronize the time on the cameras and the Windows computer. If the clocks are out of sync, you may experience delays in the tracking, or ghost tracking.
Control a PTZ camera with the built-in radar autotracking service
The built-in radar autotracking creates an edge-to-edge solution where the radar directly controls the PTZ camera. It supports all Axis PTZ cameras.
You can use the built-in radar autotracking service to connect one radar with one PTZ camera. For a setup where you want to use more than one radar or PTZ camera, use AXIS Radar Autotracking for PTZ. For more information, see Control a PTZ camera with AXIS Radar Autotracking for PTZ.
This instruction explains how to pair the radar with a PTZ camera, how to calibrate the devices, and how to set up the tracking of objects.
Before you start:
Define the area of interest and avoid unwanted alarms by setting up exclude zones in the radar. Make sure to exclude zones with radar-reflective materials or swaying objects, like foliage, to prevent the PTZ camera from tracking irrelevant objects. For instructions, see Add exclude zones.
- Pair the radar with the PTZ camera:
Go to System > Edge-to-edge > PTZ pairing.
Enter the IP address, username and password for the PTZ camera.
Click Connect.
Click Configure Radar autotracking or go to Radar > Radar PTZ autotracking to set up radar autotracking.
- Calibrate the radar and the PTZ camera:
Go to Radar > Radar PTZ autotracking.
To set the mounting height of the camera, go to Camera mounting height.
To pan the PTZ camera so that it points in the same direction as the radar, go to Pan alignment.
If you need to adjust the tilt to compensate for a sloping ground, go to Ground incline offset and add an offset in degrees.
- Set up the PTZ tracking:
Go to Track to select if you want to track humans, vehicles and/or unknown objects.
To start tracking objects with the PTZ camera, turn on Tracking.
The tracking automatically zooms in on an object, or a group of objects, to keep them in the view of the camera.
Turn on Object switching if you expect multiple objects that won’t fit in the camera view.
With this setting, the radar prioritizes which objects to track.
To determine how many seconds to track each object, set Object hold time.
To make the PTZ camera return to its home position when the radar no longer tracks any objects, turn on Return to home.
To determine how long the PTZ camera should stay at the tracked object's last known position before returning to home, set Return to home timeout.
To fine-tune the zoom of the PTZ camera, adjust the zoom slider.
Control a PTZ camera with AXIS Radar Autotracking for PTZ
AXIS Radar Autotracking for PTZ is a server-based solution that can handle different setups when tracking objects:
Control several PTZ cameras with one radar.
Control one PTZ camera with several radars.
Control several PTZ cameras with several radars.
Control one PTZ camera with one radar when they are mounted in different positions covering the same area.
The application is compatible with a specific set of PTZ cameras. For more information, see axis.com/products/axis-radar-autotracking-for-ptz#compatible-products.
Download the application and see the user manual for information about how to set up the application. For more information, see axis.com/products/axis-radar-autotracking-for-ptz/support.
Use MQTT to send radar data
Use the radar-video fusion camera with the application AXIS Speed Monitor to collect radar data for detected objects and send it over MQTT.
This example explains how to set up an MQTT client in the device where you have installed AXIS Speed Monitor, and how to create a condition that will publish the radar data collected in AXIS Speed Monitor as a payload to an MQTT broker.
- Before you start:
Install AXIS Speed Monitor in your radar-video fusion camera, or install it in a camera that you connect to the radar in the radar-video fusion camera.
For more information, see AXIS Speed Monitor user manual.
Set up an MQTT broker and get the broker’s IP address, username and password.
Learn more about MQTT and MQTT brokers in AXIS OS Knowledge Base.
- Set up the MQTT client in the web interface of the device where you have installed AXIS Speed Monitor:
Go to System > MQTT > MQTT client > Broker and enter the following information:
Host: The broker IP address
Client ID: The ID of the device
Protocol: The protocol the broker is set to
Port: The port number used by the broker
The broker Username and Password
Click Save and Connect.
- Create a condition that publishes the radar data as a payload to the MQTT broker:
Go to System > MQTT > MQTT publication and click + Add condition.
In the list of conditions, under Application, select Speed Monitor: Track exited zone.
The device will now be able to send information about the radar tracks for every moving object that exits a scenario. Every object will have its own radar track parameters, for example rmd_zone_name, tracking_id, and trigger_count. You can find the full list of parameters in AXIS Speed Monitor user manual.
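The published payload is JSON. The parameter names below (rmd_zone_name, tracking_id, trigger_count) are the ones named above, while the example values and topic handling are made up. A minimal consumer sketch:

```python
import json

# Hypothetical payload from a "Speed Monitor: Track exited zone" publication;
# the values here are invented for illustration.
payload = json.dumps({
    "rmd_zone_name": "Entrance",
    "tracking_id": 4711,
    "trigger_count": 3,
})

track = json.loads(payload)
print(f"Object {track['tracking_id']} left zone "
      f"'{track['rmd_zone_name']}' after {track['trigger_count']} triggers")
```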
Record video when the camera detects an object
This example explains how to set up the camera to start recording to the SD card when the camera detects an object. The recording will include five seconds before detection and one minute after detection ends.
Before you start:
Make sure you have an SD card installed.
- Make sure that AXIS Object Analytics is running:
Go to Apps > AXIS Object Analytics.
Start the application if it is not already running.
Make sure you have set up the application according to your needs.
- Create a rule:
Go to System > Events and add a rule.
Type a name for the rule.
In the list of conditions, under Application, select Object Analytics.
In the list of actions, under Recordings, select Record video while the rule is active.
In the list of storage options, select SD_DISK.
Select a camera and a stream profile.
Set the prebuffer time to 5 seconds.
Set the postbuffer time to 1 minute.
Click Save.
Provide visual indication of an ongoing event
You have the option to connect the AXIS I/O Indication LED to your network camera. This LED can be configured to turn on whenever certain events occur in the camera, for example to let people know that video recording is in progress.
Required hardware
AXIS I/O Indication LED
An Axis network video camera
- Note
For instructions on how to connect the AXIS I/O Indication LED, see the installation guide provided with the product.
The following example shows how to configure a rule that turns on the AXIS I/O Indication LED to indicate that the camera is recording.
Go to System > Accessories > I/O ports.
For the port that you connected the AXIS I/O Indication LED to, click to set the direction to Output, and click to set the normal state to Circuit open.
Go to System > Events.
Create a new rule.
Select the Condition that must be met to trigger the camera to start recording. It can, for example, be a time schedule or motion detection.
In the list of actions, select Record video. Select a storage space. Select a stream profile or create a new one. Also set the Prebuffer and Postbuffer as required.
Save the rule.
Create a second rule and select the same Condition as in the first rule.
In the list of actions, select Toggle I/O while the rule is active, and then select the port the AXIS I/O Indication LED is connected to. Set the state to Active.
Save the rule.
- Other scenarios where the AXIS I/O Indication LED can be used include:
Configure the LED to turn on when the camera starts, to indicate the presence of the camera. Select System ready as a condition.
Configure the LED to turn on when a live stream is active, to indicate that a person or a program is accessing a stream from the camera. Select Live stream accessed as a condition.
Show a text overlay in the video stream when the device detects an object
This example explains how to display the text “Motion detected” when the device detects an object.
- Make sure that AXIS Object Analytics is running:
Go to Apps > AXIS Object Analytics.
Start the application if it is not already running.
Make sure you have set up the application according to your needs.
- Add the overlay text:
Go to Video > Overlays.
Under Overlays, select Text and click .
Enter #D in the text field.
Choose text size and appearance.
To position the text overlay, click and select an option.
- Create a rule:
Go to System > Events and add a rule.
Type a name for the rule.
In the list of conditions, under Application, select Object Analytics.
In the list of actions, under Overlay text, select Use overlay text.
Select a video channel.
In Text, type “Motion detected”.
Set the duration.
Click Save.
If you update the overlay text, it is updated automatically on all video streams.
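The behavior of the #D modifier can be pictured as a simple placeholder substitution: the "Use overlay text" action replaces #D in the configured overlay with the text set in the rule. The function below is an illustration of that documented behavior, not the device's actual implementation:

```python
def render_overlay(template: str, dynamic_text: str) -> str:
    """Sketch of the #D modifier: the 'Use overlay text' rule action
    substitutes the rule's text wherever #D appears in the overlay."""
    return template.replace("#D", dynamic_text)

# With the overlay field set to just "#D" and the rule text set to
# "Motion detected", the stream shows "Motion detected" on detection.
print(render_overlay("#D", "Motion detected"))
```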
Record video when a PIR detector senses motion
This example explains how to connect a PIR detector (normally closed) to the device, and to start recording video when the detector senses motion.
- Required hardware
3-wire cable (ground, power, I/O)
PIR detector, normally closed
Disconnect the device from power before connecting the wires. Reconnect to power after all connections are done.
- Connect the wires to the device’s I/O connector
- Note
For information on the I/O connector, see Connectors.
Connect the ground wire to pin 1 (GND/-).
Connect the power wire to pin 2 (12V DC output).
Connect the I/O wire to pin 3 (I/O input).
- Connect the wires to the PIR detector’s I/O connector
Connect the other end of the ground wire to pin 1 (GND/-).
Connect the other end of the power wire to pin 2 (DC input/+).
Connect the other end of the I/O wire to pin 3 (I/O output).
Configure the I/O port in the device web interface
Go to System > Accessories > I/O ports.
Click to set the direction to input for port 1.
Give the input module a descriptive name, for example “PIR detector”.
If you want to trigger an event whenever the PIR detector senses motion, click to set the normal state to circuit closed.
- Create a rule
Go to System > Events and add a rule.
Type a name for the rule.
In the list of conditions, select PIR detector.
In the list of actions, under Recordings, select Record video while the rule is active.
In the list of storage options, select SD_DISK.
Select a camera and a stream profile.
Set the prebuffer time to 5 seconds.
Set the postbuffer time to 1 minute.
Click Save.
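The normal-state setting in step 4 can be understood as follows: an input is considered active when its current state differs from the configured normal state. For a normally closed PIR detector, motion opens the circuit, which activates the input and triggers the rule. A minimal sketch of that logic, assuming this active-when-different model:

```python
def input_active(circuit_closed: bool, normal_state_closed: bool = True) -> bool:
    """An input is active when its current state differs from the
    configured normal state. With a normally closed PIR detector and the
    normal state set to 'circuit closed', motion (which opens the
    circuit) activates the input."""
    return circuit_closed != normal_state_closed

# Motion detected: the PIR opens the circuit, so the input goes active.
print(input_active(circuit_closed=False))
```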
Record video when the camera detects loud noises
This example explains how to set up the camera to start recording to the SD card five seconds before it detects loud noise and to stop two minutes after.
The following instructions require that a microphone is connected to audio-in.
- Turn on audio:
Set up the stream profile to include audio, see Add audio to your recording.
- Turn on audio detection:
Go to System > Detectors > Audio detection.
Adjust the sound level according to your needs.
- Create a rule:
Go to System > Events and add a rule.
Type a name for the rule.
In the list of conditions, under Audio, select Audio Detection.
In the list of actions, under Recordings, select Record video.
In the list of storage options, select SD_DISK.
Select the stream profile where audio has been turned on.
Set the prebuffer time to 5 seconds.
Set the postbuffer time to 2 minutes.
Click Save.
Detect tampering with input signal
This example explains how to send an email when the input signal is cut or short-circuited. For more information about the I/O connector, see I/O connector.
Go to System > Accessories and turn on Supervised for the relevant port.
- Add an email recipient:
Go to System > Events > Recipients and add a recipient.
Type a name for the recipient.
Select Email.
Type an email address to send the email to.
The camera doesn't have its own email server, so it has to log into another email server to send emails. Fill in the rest of the information according to your email provider.
To send a test email, click Test.
Click Save.
- Create a rule:
Go to System > Events > Rules and add a rule.
Type a name for the rule.
In the list of conditions, under I/O, select Supervised input tampering is active.
Select the relevant port.
In the list of actions, under Notifications, select Send notification to email and then select the recipient from the list.
Type a subject and a message for the email.
Click Save.
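The recipient settings above correspond to the fields of an ordinary email message. As a rough illustration of what the device assembles before logging into the provider's SMTP server, here is a sketch using Python's standard library; the addresses are hypothetical, and the device's internal implementation is not documented:

```python
from email.message import EmailMessage

def build_alert_mail(sender: str, recipient: str,
                     subject: str, body: str) -> EmailMessage:
    """Construct a notification email with the subject and message
    configured in the rule. Illustration only; the device performs the
    SMTP login itself using the recipient-page settings."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

mail = build_alert_mail("camera@example.com", "guard@example.com",
                        "Input tampering", "Supervised input was cut or shorted.")
print(mail["Subject"])
```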
Audio
Add audio to your recording
- Turn on audio:
Go to Video > Stream > Audio and include audio.
If the device has more than one input source, select the correct one in Source.
Go to Audio > Device settings and turn on the correct input source.
If you make any changes to the input source, click Apply changes.
- Edit the stream profile that is used for the recording:
Go to System > Stream profiles and select the stream profile.
Select Include audio and turn it on.
Click Save.
The web interface
To reach the device’s web interface, type the device’s IP address in a web browser.
Support for the features and settings described in this section varies between devices. This icon indicates that the feature or setting is only available in some devices.
Show or hide the main menu. Access the release notes. Access the product help. Change the language. Set light theme or dark theme. The user menu contains:
The context menu contains:
Status
Security
Shows what kind of access to the device is active, which encryption protocols are in use, and whether unsigned apps are allowed. Recommendations for the settings are based on the AXIS OS Hardening Guide.
Hardening guide: Link to AXIS OS Hardening guide where you can learn more about cybersecurity on Axis devices and best practices. |
Time sync status
Shows NTP synchronization information, including if the device is in sync with an NTP server and the time remaining until the next sync.
NTP settings: View and update the NTP settings. Takes you to the Time and location page where you can change the NTP settings. |
Ongoing recordings
Shows ongoing recordings and their designated storage space.
Recordings: View ongoing and filtered recordings and their source. For more information, see Recordings. Shows the storage space where the recording is saved. |
Device info
Shows the device information, including AXIS OS version and serial number.
Upgrade AXIS OS: Upgrade the software on your device. Takes you to the Maintenance page where you can do the upgrade. |
Connected clients
Shows the number of connections and connected clients.
View details: View and update the list of connected clients. The list shows IP address, protocol, port, state, and PID/process of each connection. |
AXIS Image Health Analytics
Shows the status of the preinstalled application AXIS Image Health Analytics and if the application has detected any issues.
Go to apps: Go to the Apps page where you can manage your installed applications. Open application: Open AXIS Image Health Analytics in a new browser tab. |
Video
Click to play the live video stream. Click to freeze the live video stream. Click to take a snapshot of the live video stream. The file is saved in the ‘Downloads’ folder on your computer. The image file name is [snapshot_YYYY_MM_DD_HH_MM_SS.jpg]. The size of the snapshot depends on the compression applied by the web-browser engine that receives it, so the snapshot size may differ from the compression setting configured in the device. Click to show I/O output ports. Use the switch to open or close the circuit of a port, for example, to test external devices. Click to manually turn on or turn off the IR illumination. Click to manually turn on or turn off the white light. Click to access onscreen controls:
Starts the washer. When the sequence starts, the camera moves to the configured position to receive the wash spray. When the whole wash sequence is completed, the camera returns to its previous position. This icon is only visible when the washer is connected and configured. Starts the wiper. Click and select a preset position to go to that preset position in the live view. Or, click Setup to go to the preset position page. Adds or removes a focus recall area. When you add a focus recall area, the camera saves the focus settings at that specific pan/tilt range. When you have set a focus recall area and the camera enters that area in the live view, the camera recalls the previously saved focus. It’s enough to cover half of the area for the camera to recall the focus. Click to select a guard tour, then click Start to play the guard tour. Or, click Setup to go to the guard tours page. Click to manually turn on the heater for a selected period of time. Click to start a continuous recording of the live video stream. Click again to stop the recording. If a recording is ongoing, it will resume automatically after a reboot. Click to show the storage that is configured for the device. To configure the storage, you need to be logged in as an administrator. Click to access more settings:
Click to show the live view at full resolution. If the full resolution is larger than your screen size, use the smaller image to navigate in the image. Click to show the live video stream in full screen. Press Esc to exit full screen mode. |
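The snapshot file name follows the snapshot_YYYY_MM_DD_HH_MM_SS.jpg pattern described above. For scripts that need to locate or generate names in the same pattern, the format maps directly to a timestamp format string:

```python
from datetime import datetime

def snapshot_filename(ts: datetime) -> str:
    """Build a file name in the documented
    snapshot_YYYY_MM_DD_HH_MM_SS.jpg pattern for a given timestamp."""
    return ts.strftime("snapshot_%Y_%m_%d_%H_%M_%S.jpg")

# Example timestamp: 31 January 2024, 13:05:09.
print(snapshot_filename(datetime(2024, 1, 31, 13, 5, 9)))
```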
Installation
Capture mode: A capture mode is a preset configuration that defines how the camera captures images. When you change the capture mode, it can affect many other settings, such as view areas and privacy masks. Mounting position: The orientation of the image can change depending on how you mount the camera. Power line frequency: To minimize image flicker, select the frequency your region uses. The American regions usually use 60 Hz. The rest of the world mostly uses 50 Hz. If you're not sure of your region's power line frequency, check with the local authorities. |
Zoom: Use the slider to adjust the zoom level. Autofocus after zooming: Turn on to enable autofocus after zooming. Focus: Use the slider to manually set the focus. AF: Click to make the camera focus on the selected area. If you don’t select an autofocus area, the camera focuses on the entire scene. Autofocus area: Click to show the autofocus area. This area should include the area of interest. Reset focus: Click to make the focus return to its original position. Note In cold environments, it can take several minutes for the zoom and focus to become available. |
Image correction
Important We recommend that you do not use multiple image correction features at the same time, since this can lead to performance issues. Barrel distortion correction (BDC): Turn on to get a straighter image if it suffers from barrel distortion. Barrel distortion is a lens effect that makes the image appear curved and bent outwards. The condition is seen more clearly when the image is zoomed out. Crop: Use the slider to adjust the correction level. A lower level means that the image width is kept at the expense of image height and resolution. A higher level means that image height and resolution are kept at the expense of image width. Remove distortion: Use the slider to adjust the correction level. Pucker means that the image width is kept at the expense of image height and resolution. Bloat means that image height and resolution are kept at the expense of image width. Image stabilization: Turn on to get a smoother and steadier image with less blur. We recommend that you use image stabilization in environments where the device is mounted in an exposed location and subject to vibrations due to, for example, wind or passing traffic. Focal length: Use the slider to adjust the focal length. A higher value leads to higher magnification and a narrower angle of view, while a lower value leads to a lower magnification and a wider angle of view. Stabilizer margin: Use the slider to adjust the size of the stabilizer margin, which determines the level of vibration to stabilize. If the product is mounted in an environment with a lot of vibration, move the slider towards Max. As a result, a smaller scene is captured. If the environment has less vibration, move the slider towards Min. Straighten image: Turn on and use the slider to straighten the image horizontally by rotating and cropping it digitally. The functionality is useful when it’s not possible to mount the camera exactly level. Ideally, straighten the image during installation.
: Click to show a supporting grid in the image. : Click to hide the grid. |
Image
Appearance
Scene profile: Select a scene profile that suits your surveillance scenario. A scene profile optimizes image settings, including color level, brightness, sharpness, contrast, and local contrast, for a specific environment or purpose.
Saturation: Use the slider to adjust the color intensity. You can, for example, get a grayscale image. Contrast: Use the slider to adjust the difference between light and dark. Brightness: Use the slider to adjust the light intensity. This can make objects easier to see. Brightness is applied after image capture, and doesn’t affect the information in the image. To get more details from a dark area, it’s usually better to increase gain or exposure time. Sharpness: Use the slider to make objects in the image appear sharper by adjusting the edge contrast. If you increase the sharpness, it may increase the bitrate and the amount of storage space needed as well. |
Wide dynamic range
WDR: Turn on to make both bright and dark areas of the image visible. Local contrast: Use the slider to adjust the contrast of the image. A higher value makes the contrast higher between dark and light areas. Tone mapping: Use the slider to adjust the amount of tone mapping that is applied to the image. If the value is set to zero, only the standard gamma correction is applied, while a higher value increases the visibility of the darkest and brightest parts in the image. |
White balance
When the camera detects the color temperature of the incoming light, it can adjust the image to make the colors look more natural. If this is not sufficient, you can select a suitable light source from the list.
The automatic white balance setting reduces the risk of color flicker by adapting to changes gradually. If the lighting changes, or when the camera is first started, it can take up to 30 seconds to adapt to the new light source. If there is more than one type of light source in a scene, that is, they differ in color temperature, the dominating light source acts as a reference for the automatic white balance algorithm. This behavior can be overridden by choosing a fixed white balance setting that matches the light source you want to use as a reference.
Light environment:
Day-night mode
IR-cut filter:
Threshold: Use the slider to adjust the light threshold where the camera changes from day mode to night mode.
IR light If your device doesn’t have built-in illumination, these controls are only available when you connect a supporting Axis accessory. Allow illumination: Turn on to let the camera use the built-in light in night mode. Synchronize illumination: Turn on to automatically synchronize the illumination with the surrounding light. The synchronization between day and night only works if the IR-cut filter is set to Auto or Off. Automatic illumination angle: Turn on to use the automatic illumination angle. Turn off to set the illumination angle manually. Illumination angle: Use the slider to manually set the illumination angle, for example, if the angle needs to be different from the camera’s angle of view. If the camera has a wide angle of view, you can set the illumination angle to a narrower angle, which equals a greater tele position. This will result in dark corners in the image. IR wavelength: Select the desired wavelength for the IR light. White light Allow illumination: Turn on to let the camera use white light in night mode. Synchronize illumination: Turn on to automatically synchronize the white light with the surrounding light. |
Exposure
Select an exposure mode to reduce rapidly changing irregular effects in the image, for example, flicker produced by different types of light sources. We recommend using the automatic exposure mode, or the same frequency as your power network.
Exposure mode:
Exposure zone: Use exposure zones to optimize the exposure in a selected part of the scene, for example, the area in front of an entrance door. Note The exposure zones are related to the original image (unrotated), and the names of the zones apply to the original image. This means, for example, that if the video stream is rotated 90°, then the Upper zone becomes the Right zone in the stream, and Left becomes Lower.
Max shutter: Select the shutter speed to provide the best image. Low shutter speeds (longer exposure) might cause motion blur when there is movement, and a too high shutter speed might affect the image quality. Max shutter works with max gain to improve the image. Max gain: Select the suitable max gain. If you increase the max gain, it improves the visible level of detail in dark images, but also increases the noise level. More noise can also result in increased use of bandwidth and storage. If you set the max gain to a high value, images can differ a lot if the light conditions are very different from day to night. Max gain works with max shutter to improve the image. Motion-adaptive exposure: Select to reduce motion blur in low-light conditions. Blur-noise trade-off: Use the slider to adjust the priority between motion blur and noise. If you want to prioritize low bandwidth and have less noise at the expense of details in moving objects, move the slider towards Low noise. If you want to prioritize the preservation of details in moving objects at the expense of noise and bandwidth, move the slider towards Low motion blur. Note You can change the exposure either by adjusting the exposure time or by adjusting the gain. If you increase the exposure time, it results in more motion blur, and if you increase the gain, it results in more noise. If you adjust the Blur-noise trade-off towards Low noise, the automatic exposure will prioritize longer exposure times over increasing gain, and the opposite if you adjust the trade-off towards Low motion blur. Both the gain and exposure time will eventually reach their maximum values in low-light conditions, regardless of the priority set. Lock aperture: Turn on to keep the aperture size set by the Aperture slider. Turn off to allow the camera to automatically adjust the aperture size. You can, for example, lock the aperture for scenes with permanent light conditions. 
Aperture: Use the slider to adjust the aperture size, that is, how much light passes through the lens. To allow more light to enter the sensor and thereby produce a brighter image in low-light conditions, move the slider towards Open. An open aperture also reduces the depth of field, which means that objects close to or far from the camera can appear unfocused. To allow more of the image to be in focus, move the slider towards Closed. Exposure level: Use the slider to adjust the image exposure. Defog: Turn on to detect the effects of foggy weather and automatically remove them for a clearer image. Note We recommend that you do not turn on Defog in scenes with low contrast, large light level variations, or when the autofocus is slightly off. This can affect the image quality, for example, by increasing the contrast. Furthermore, too much light can negatively impact the image quality when defog is active. |
Optics
Temperature compensation: Turn on if you want the focus position to be corrected based on the temperature in the optics. IR compensation: Turn on if you want the focus position to be corrected when IR-cut filter is off and when there is IR light. Calibrate zoom and focus: Click to reset the optics and the zoom and focus settings to the factory default position. You need to do this if the optics have lost calibration during transport, or if the device has been exposed to extreme vibrations. |
Stream
General
Resolution: Select the image resolution suitable for the surveillance scene. A higher resolution increases bandwidth and storage. Frame rate: To avoid bandwidth problems on the network or reduce storage size, you can limit the frame rate to a fixed amount. If you leave the frame rate at zero, the frame rate is kept at the highest possible rate under the current conditions. A higher frame rate requires more bandwidth and storage capacity. P-frames: A P-frame is a predicted image that shows only the changes in the image from the previous frame. Enter the desired number of P-frames. The higher the number, the less bandwidth is required. However, if there is network congestion, there could be a noticeable deterioration in the video quality. Compression: Use the slider to adjust the image compression. High compression results in a lower bitrate and lower image quality. Low compression improves the image quality, but uses more bandwidth and storage when you record. Signed video: Turn on to add the signed video feature to the video. Signed video protects the video from tampering by adding cryptographic signatures to the video. |
Zipstream
Zipstream is a bitrate reduction technology, optimized for video surveillance, that reduces the average bitrate in an H.264 or H.265 stream in real time. Axis Zipstream applies a high bitrate in scenes where there are multiple regions of interest, for example, in scenes with moving objects. When the scene is more static, Zipstream applies a lower bitrate, and thereby reduces the required storage. To learn more, see Reducing the bit rate with Axis Zipstream.
Optimize for storage: Turn on to minimize the bitrate while maintaining quality. The optimization does not apply to the stream shown in the web client. This can only be used if your VMS supports B-frames. Turning on Optimize for storage also turns on Dynamic GOP. Dynamic FPS (frames per second): Turn on to allow the bandwidth to vary based on the level of activity in the scene. More activity requires more bandwidth. Lower limit: Enter a value to adjust the frame rate between minimal fps and the stream default fps based on scene motion. We recommend using Lower limit in scenes with very little motion, where the fps could drop to 1 or lower. Dynamic GOP (Group of Pictures): Turn on to dynamically adjust the interval between I-frames based on the level of activity in the scene. Upper limit: Enter a maximum GOP length, that is, the maximum number of P-frames between two I-frames. An I-frame is a self-contained image frame that is independent of other frames. |
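The GOP upper limit translates directly into a maximum time between I-frames at a given frame rate, which is useful when estimating seek granularity and error recovery in recordings. A quick calculation, using 300 frames and 30 fps purely as example values:

```python
def iframe_interval_seconds(gop_length: int, fps: float) -> float:
    """Seconds between consecutive I-frames for a given GOP length and
    frame rate. With Dynamic GOP the actual interval varies up to the
    configured upper limit."""
    return gop_length / fps

# A GOP upper limit of 300 at 30 fps allows up to 10 s between I-frames.
print(iframe_interval_seconds(300, 30))
```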
Bitrate control
Audio
Include: Turn on to use audio in the video stream. Source: Select what audio source to use. Stereo: Turn on to include built-in audio as well as audio from an external microphone. |
Overlays
: Click to add an overlay. Select the type of overlay from the dropdown list:
Widget: Meter: Show a bar chart that displays the most recently measured data value.
Privacy masks
: Click to create a new privacy mask. Privacy masks: Click to change the color of all privacy masks, or to delete all privacy masks permanently. Cell size: If you choose mosaic color, the privacy masks appear as pixilated patterns. Use the slider to change the size of the pixels. Mask x: Click to rename, disable, or permanently delete the mask. |
Radar
Settings
General
Radar transmission: Use this to turn off the radar module completely. Channel: If you have problems with multiple devices interfering with each other, select the same channel for up to four devices that are close to each other. For most installations, select Auto to let the devices automatically negotiate which channel to use. Mounting height: Enter the mounting height for the product. Note Be as specific as you can when you enter the mounting height. This helps the device visualize the radar detection in the correct position in the image. |
Detection
Detection sensitivity: Select how sensitive the radar should be. A higher value means that you get a longer detection range, but there is also a higher risk of false alarms. A lower sensitivity decreases the number of false alarms, but it may shorten the detection range. Radar profile: Select a profile that suits your area of interest.
View
Information legend: Turn on to show a legend containing the object types the radar can detect and track. Drag and drop to move the information legend. Zone opacity: Select how opaque or transparent the coverage zone should be. Grid opacity: Select how opaque or transparent the grid should be. Color scheme: Select a theme for the radar visualization. Rotation: Select the preferred orientation of the radar image. |
Object visualization
Trail lifetime: Select how long the trail of a tracked object is visible in the radar view. Icon style: Select the icon style of the tracked objects in the radar view. For plain triangles, select Triangle. For representative symbols, select Symbol. The icons will point in the direction the tracked objects are moving, regardless of style.
Stream
General
Resolution: Select the image resolution suitable for the surveillance scene. A higher resolution increases bandwidth and storage. Frame rate: To avoid bandwidth problems on the network or reduce storage size, you can limit the frame rate to a fixed amount. If you leave the frame rate at zero, the frame rate is kept at the highest possible rate under the current conditions. A higher frame rate requires more bandwidth and storage capacity. P-frames: A P-frame is a predicted image that shows only the changes in the image from the previous frame. Enter the desired number of P-frames. The higher the number, the less bandwidth is required. However, if there is network congestion, there could be a noticeable deterioration in the video quality. Compression: Use the slider to adjust the image compression. High compression results in a lower bitrate and lower image quality. Low compression improves the image quality, but uses more bandwidth and storage when you record. Signed video: Turn on to add the signed video feature to the video. Signed video protects the video from tampering by adding cryptographic signatures to the video. |
Zipstream
Zipstream is a bitrate reduction technology, optimized for video surveillance, that reduces the average bitrate in an H.264 or H.265 stream in real time. Axis Zipstream applies a high bitrate in scenes where there are multiple regions of interest, for example, in scenes with moving objects. When the scene is more static, Zipstream applies a lower bitrate, and thereby reduces the required storage. To learn more, see Reducing the bit rate with Axis Zipstream.
Optimize for storage: Turn on to minimize the bitrate while maintaining quality. The optimization does not apply to the stream shown in the web client. This can only be used if your VMS supports B-frames. Turning on Optimize for storage also turns on Dynamic GOP. Dynamic FPS (frames per second): Turn on to allow the bandwidth to vary based on the level of activity in the scene. More activity requires more bandwidth. Lower limit: Enter a value to adjust the frame rate between minimal fps and the stream default fps based on scene motion. We recommend using Lower limit in scenes with very little motion, where the fps could drop to 1 or lower. Dynamic GOP (Group of Pictures): Turn on to dynamically adjust the interval between I-frames based on the level of activity in the scene. Upper limit: Enter a maximum GOP length, that is, the maximum number of P-frames between two I-frames. An I-frame is a self-contained image frame that is independent of other frames. |
Bitrate control
Audio
Include: Turn on to use audio in the video stream. Source: Select what audio source to use. Stereo: Turn on to include built-in audio as well as audio from an external microphone. |
Map calibration
Use map calibration to upload and calibrate a reference map. The result of the calibration is a reference map that displays the radar coverage in the appropriate scale, which makes it easier to see where objects are moving.
Setup assistant: Click to open the setup assistant that guides you through the calibration step by step. Reset calibration: Click to remove the current map image and radar position on the map. |
Map
Upload map: Select or drag and drop the map image you want to upload. Download map: Click to download the map. Rotate map: Use the slider to rotate the map image. |
Scale and distance on map
Distance: Add the distance between the two points you have added to the map. |
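Entering the real-world distance between two reference points is what lets the calibration derive the map's scale. Conceptually, the scale is the real distance divided by the pixel distance between the points; the coordinates below are illustrative pixel positions, not values from any actual map:

```python
import math

def map_scale(p1: tuple, p2: tuple, real_distance_m: float) -> float:
    """Meters per pixel implied by two reference points on the uploaded
    map image and the real-world distance entered between them."""
    pixel_dist = math.dist(p1, p2)  # Euclidean distance in pixels
    return real_distance_m / pixel_dist

# Two points 400 px apart on the map that are 100 m apart in reality
# give a scale of 0.25 m per pixel.
print(map_scale((100, 100), (500, 100), 100.0))
```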
Pan and zoom map
Pan: Click on the buttons to pan the map image. Zoom: Click on the buttons to zoom in or out on the map image. Reset pan and zoom: Click to remove the pan and zoom settings. |
Radar position
Position: Click on the buttons to move the radar on the map. Rotation: Click on the buttons to rotate the radar on the map. |
Exclude zones
An exclude zone is an area in which moving objects are ignored. Use exclude zones if there are areas inside a scenario that trigger a lot of unwanted alarms. : Click to create a new exclude zone. To modify an exclude zone, select it in the list. Track passing objects: Turn on to track objects that pass through the exclude zone. The passing objects keep their track IDs and are visible throughout the zone. Objects that appear from within the exclude zone will not be tracked. Zone shape presets: Select the initial shape of the exclude zone.
Scenarios
A scenario is a combination of triggering conditions, as well as scene and detection settings. : Click to create a new scenario. You can create up to 20 scenarios.
For Line crossing:
Overlays
: Click to add an overlay. Select the type of overlay from the dropdown list:
Widget: Meter: Show a bar chart that displays the most recently measured data value.
Radar PTZ autotracking
Pair the radar with a PTZ camera to use radar autotracking. To establish the connection, go to System > Edge-to-edge.
Configure initial settings: Camera mounting height: The distance from the ground to the height of the mounted PTZ camera. Pan alignment: Pan the PTZ camera so that it points in the same direction as the radar. Click on the IP address of the PTZ camera to access it. Save pan offset: Click to save the pan alignment. Ground incline offset: Use the ground incline offset to fine tune the camera’s tilt. If the ground is sloped, or if the camera is not mounted horizontally, the camera may aim too high or too low when tracking an object. Done: Click to save your settings and continue with the configuration. |
Configure PTZ autotracking: Track: Select if you want to track humans, vehicles and/or unknown objects. Tracking: Turn on to start tracking objects with the PTZ camera. The tracking automatically zooms in on an object, or a group of objects, to keep them in the view of the camera. Object switching: If the radar detects multiple objects that won't fit in the PTZ camera's view, the PTZ camera tracks the object that the radar gives the highest priority, and ignores the others. Object hold time: Determines for how many seconds the PTZ camera should track each object. Return to home: Turn on to make the PTZ camera return to its home position when the radar no longer tracks any objects. Return to home timeout: Determines how long the PTZ camera should stay at the tracked object's last known position before returning to home. Zoom: Use the slider to fine tune the zoom of the PTZ camera. Reconfigure installation: Click to clear all settings and go back to the initial configuration. |
Autocalibration
Elevation
Status: Shows if calibration data is available or not. The camera and radar collect calibration data continuously. Autocalibration: Turn on to autocalibrate the scene. The autocalibration occurs as soon as calibration data is available. Check the status for availability.
Reset: Resets the autocalibration and the gathered calibration data. Show elevation pattern: Turn on to visualize the calibration. Shows the vertical distance from the ground up to the camera in a pattern of colored dots. The pattern is only visible on this page, not in the video or radar stream. Show color legend: Turn on to show a legend containing the colors of the elevation pattern and the vertical distance that each color represents. The legend is only visible on this page, not in the video or radar stream. Color: Select the colors for the elevation pattern. Show reference area: Turn on to show the area that the calibration is based on. The area is only visible on this page, not in the video or radar stream. |
Azimuth
Status: Shows if calibration data is available or not. The camera and radar collect calibration data continuously. Autocalibration: Turn on to autocalibrate the scene. The autocalibration occurs as soon as calibration data is available. Check the status for availability. Reset: Resets the autocalibration and the gathered calibration data. |
Analytics
AXIS Object Analytics
Start: Click to start AXIS Object Analytics. The application will run in the background, and you can create rules for events based on the application’s current settings. Open: Click to open AXIS Object Analytics. The application opens up in a new browser tab where you can configure its settings. Not installed: AXIS Object Analytics is not installed on this device. Upgrade AXIS OS to the latest version to get the latest version of the application. |
AXIS Image Health Analytics
Start: Click to start AXIS Image Health Analytics. The application will run in the background, and you can create rules for events based on the application’s current settings. Open: Click to open AXIS Image Health Analytics. The application opens up in a new browser tab where you can configure its settings. Not installed: AXIS Image Health Analytics is not installed on this device. Upgrade AXIS OS to the latest version to get the latest version of the application. |
Metadata visualization
The camera detects moving objects and classifies them according to object type. In the view, a classified object has a colored bounding box around it along with its assigned id. Id: A unique identification number for the identified object and its type. This number is shown in both the list and the view. Type: Classifies a moving object as Human, Face, Car, Bus, Truck, Bike, or License Plate. The color of the bounding box depends on the type classification. Confidence: The bar indicates the level of confidence in the classification of the object type. |
Audio
Device settings
Input: Turn on or off audio input. Shows the type of input.
Input type: Select the type of input, for example, whether it’s an internal microphone or line input. Power type: Select power type for your input. Apply changes: Apply your selection. Echo cancellation: Turn on to remove echoes during two-way communication. Separate gain controls: Turn on to adjust the gain separately for the different input types. Automatic gain control: Turn on to dynamically adapt the gain to changes in the sound. Gain: Use the slider to change the gain. Click the microphone icon to mute or unmute. |
Output: Shows the type of output.
Gain: Use the slider to change the gain. Click the speaker icon to mute or unmute. |
Stream
Encoding: Select the encoding to use for the input source streaming. You can only choose encoding if audio input is turned on. If audio input is turned off, click Enable audio input to turn it on. |
Audio clips
Add clip: Add a new audio clip. You can use .au, .mp3, .opus, .vorbis, .wav files. Play the audio clip. Stop playing the audio clip. The context menu contains:
Audio enhancement
Input
Ten Band Graphic Audio Equalizer: Turn on to adjust the level of different frequency bands within an audio signal. This feature is for advanced users with audio configuration experience. Talkback range: Choose the operational range to gather audio content. Increasing the operational range reduces simultaneous two-way communication capabilities. Voice enhancement: Turn on to enhance the voice content in relation to other sounds. |
Recordings
Ongoing recordings: Show all ongoing recordings on the device. Start a recording on the device. Choose which storage device to save to. Stop a recording on the device. Triggered recordings will end when manually stopped or when the device is shut down. Continuous recordings will continue until manually stopped. Even if the device is shut down, the recording will continue when the device starts up again. |
Play the recording. Stop playing the recording. Show or hide information and options about the recording. Set export range: If you only want to export part of the recording, enter a time span. Note that if you work in a different time zone than the location of the device, the time span is based on the device’s time zone. Encrypt: Select to set a password for exported recordings. It will not be possible to open the exported file without the password. Click to delete a recording. Export: Export the whole or a part of the recording. |
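The note about time zones can be illustrated with a short sketch: convert an operator-local export range to the device's time zone before entering it. The offsets below are hypothetical examples, chosen as fixed offsets so the sketch needs no time zone database.

```python
from datetime import datetime, timedelta, timezone

def to_device_time(local_dt: datetime, device_tz: timezone) -> datetime:
    """Convert an operator-local timestamp to the device's time zone."""
    return local_dt.astimezone(device_tz)

# Hypothetical example: the operator works at UTC+2, the device at UTC-5.
operator_tz = timezone(timedelta(hours=2))
device_tz = timezone(timedelta(hours=-5))

start_local = datetime(2024, 6, 1, 14, 0, tzinfo=operator_tz)
start_device = to_device_time(start_local, device_tz)
print(start_device.isoformat())  # 2024-06-01T07:00:00-05:00
```

The same conversion applies to both ends of the export range.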
Click to filter the recordings. From: Show recordings done after a certain point in time. To: Show recordings up until a certain point in time. Source: Show recordings based on source. The source refers to the sensor. Event: Show recordings based on events. Storage: Show recordings based on storage type. |
Apps
Add app: Install a new app. Find more apps: Find more apps to install. You will be taken to an overview page of Axis apps. Allow unsigned apps: Turn on to allow installation of unsigned apps. Allow root-privileged apps: Turn on to allow apps with root privileges full access to the device. View the security updates in AXIS OS and ACAP apps. Note The device’s performance might be affected if you run several apps at the same time. Use the switch next to the app name to start or stop the app. Open: Access the app’s settings. The available settings depend on the application. Some applications don’t have any settings. The context menu can contain one or more of the following options:
System
Time and location
Date and time
The time format depends on the web browser’s language settings.
We recommend you synchronize the device’s date and time with an NTP server.
Synchronization: Select an option for the device’s date and time synchronization.
Time zone: Select which time zone to use. Time will automatically adjust to daylight saving time and standard time.
Note The system uses the date and time settings in all recordings, logs, and system settings. |
Device location
Enter where the device is located. Your video management system can use this information to place the device on a map.
Regional settings
Sets the system of measurement to use in all system settings.
Metric (m, km/h): Select for distance measurement to be in meters and speed measurement to be in kilometers per hour. U.S. customary (ft, mph): Select for distance measurement to be in feet and speed measurement to be in miles per hour. |
Network
IPv4
Assign IPv4 automatically: Select to let the network router assign an IP address to the device automatically. We recommend automatic IP (DHCP) for most networks. IP address: Enter a unique IP address for the device. Static IP addresses can be assigned at random within isolated networks, provided that each address is unique. To avoid conflicts, we recommend you contact your network administrator before you assign a static IP address. Subnet mask: Enter the subnet mask to define what addresses are inside the local area network. Any address outside the local area network goes through the router. Router: Enter the IP address of the default router (gateway) used to connect devices that are attached to different networks and network segments. Fallback to static IP address if DHCP isn’t available: Select if you want to add a static IP address to use as fallback if DHCP is unavailable and can’t assign an IP address automatically. Note If DHCP isn’t available and the device uses a static address fallback, the static address is configured with a limited scope. |
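The subnet mask decides which addresses are treated as local and which are routed through the default router. A minimal sketch using Python's standard ipaddress module, with hypothetical addresses:

```python
import ipaddress

def is_local(address: str, device_ip: str, subnet_mask: str) -> bool:
    """True if 'address' is inside the local area network defined by the
    device's IP address and subnet mask; anything else goes via the router."""
    network = ipaddress.ip_network(f"{device_ip}/{subnet_mask}", strict=False)
    return ipaddress.ip_address(address) in network

# Hypothetical device at 192.168.0.90 with a /24 mask.
print(is_local("192.168.0.42", "192.168.0.90", "255.255.255.0"))  # True
print(is_local("10.0.0.5", "192.168.0.90", "255.255.255.0"))      # False
```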
IPv6
Assign IPv6 automatically: Select to turn on IPv6 and to let the network router assign an IP address to the device automatically. |
Hostname
Assign hostname automatically: Select to let the network router assign a hostname to the device automatically. Hostname: Enter the hostname manually to use as an alternative way of accessing the device. The server report and system log use the hostname. Allowed characters are A–Z, a–z, 0–9 and -. |
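The allowed-character rule for hostnames (A–Z, a–z, 0–9 and -) can be checked with a one-line pattern. A sketch, with hypothetical hostnames:

```python
import re

# Allowed characters per the setting above: A-Z, a-z, 0-9 and hyphen.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9-]+$")

def is_valid_hostname(name: str) -> bool:
    return bool(HOSTNAME_RE.fullmatch(name))

print(is_valid_hostname("axis-camera-01"))  # True
print(is_valid_hostname("axis_camera"))     # False (underscore not allowed)
```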
DNS servers
Assign DNS automatically: Select to let the DHCP server assign search domains and DNS server addresses to the device automatically. We recommend automatic DNS (DHCP) for most networks. Search domains: When you use a hostname that is not fully qualified, click Add search domain and enter a domain in which to search for the hostname the device uses. DNS servers: Click Add DNS server and enter the IP address of the DNS server. This provides the translation of hostnames to IP addresses on your network. |
HTTP and HTTPS
HTTPS is a protocol that provides encryption for page requests from users and for the pages returned by the web server. The encrypted exchange of information is governed by the use of an HTTPS certificate, which guarantees the authenticity of the server.
To use HTTPS on the device, you must install an HTTPS certificate. Go to System > Security to create and install certificates.
Allow access through: Select if a user is allowed to connect to the device through the HTTP, HTTPS, or both HTTP and HTTPS protocols. Note If you view encrypted web pages through HTTPS, you might experience a drop in performance, especially when you request a page for the first time. HTTP port: Enter the HTTP port to use. The device allows port 80 or any port in the range 1024-65535. If you are logged in as an administrator, you can also enter any port in the range 1-1023. If you use a port in this range, you get a warning. HTTPS port: Enter the HTTPS port to use. The device allows port 443 or any port in the range 1024-65535. If you are logged in as an administrator, you can also enter any port in the range 1-1023. If you use a port in this range, you get a warning. Certificate: Select a certificate to enable HTTPS for the device. |
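The port rules above (the default port or 1024-65535 for everyone; 1-1023 only for administrators, with a warning) can be sketched as a small classifier. The function name is illustrative, not the device's API:

```python
def check_port(port: int, default: int, is_admin: bool) -> str:
    """Classify a port per the rules above: the default port (80 for HTTP,
    443 for HTTPS) or 1024-65535 is always allowed; 1-1023 is allowed only
    for administrators and produces a warning."""
    if port == default or 1024 <= port <= 65535:
        return "allowed"
    if 1 <= port <= 1023 and is_admin:
        return "allowed-with-warning"
    return "rejected"

print(check_port(8080, 80, is_admin=False))  # allowed
print(check_port(8, 80, is_admin=True))      # allowed-with-warning
print(check_port(8, 80, is_admin=False))     # rejected
```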
Global proxies
Http proxy: Specify a global proxy host or IP address according to the allowed format. Https proxy: Specify a global proxy host or IP address according to the allowed format.
Note Restart the device to apply the global proxy settings.
Network discovery protocols
Bonjour®: Turn on to allow automatic discovery on the network. Bonjour name: Enter a friendly name to be visible on the network. The default name is the device name and MAC address. UPnP®: Turn on to allow automatic discovery on the network. UPnP name: Enter a friendly name to be visible on the network. The default name is the device name and MAC address. WS-Discovery: Turn on to allow automatic discovery on the network. LLDP and CDP: Turn on to allow automatic discovery on the network. Turning LLDP and CDP off can impact the PoE power negotiation. To resolve any issues with the PoE power negotiation, configure the PoE switch for hardware PoE power negotiation only. |
One-click cloud connection
One-click cloud connection (O3C) together with an O3C service provides easy and secure internet access to live and recorded video from any location. For more information, see axis.com/end-to-end-solutions/hosted-services.
Allow O3C:
Proxy settings: If needed, enter the proxy settings to connect to the proxy server. Host: Enter the proxy server’s address. Port: Enter the port number used for access. Login and Password: If needed, enter username and password for the proxy server. Authentication method:
Owner authentication key (OAK): Click Get key to fetch the owner authentication key. This is only possible if the device is connected to the internet without a firewall or proxy. |
SNMP
The Simple Network Management Protocol (SNMP) allows remote management of network devices.
SNMP: Select the version of SNMP to use.
Note All Axis Video MIB traps are enabled when you turn on SNMP v1 and v2c traps. For more information, see AXIS OS Portal > SNMP.
Security
Certificates
Certificates are used to authenticate devices on a network. The device supports two types of certificates:
These formats are supported:
Important If you reset the device to factory default, all certificates are deleted. Any pre-installed CA certificates are reinstalled. Add certificate: Click to add a certificate.
The context menu contains:
Secure keystore:
Network access control and encryption
IEEE 802.1x
IEEE 802.1x is an IEEE standard for port-based network admission control providing secure authentication of wired and wireless network devices. IEEE 802.1x is based on EAP (Extensible Authentication Protocol). To access a network protected by IEEE 802.1x, network devices must authenticate themselves. The authentication is performed by an authentication server, typically a RADIUS server (for example, FreeRADIUS and Microsoft Internet Authentication Server).
IEEE 802.1AE MACsec
IEEE 802.1AE MACsec is an IEEE standard for media access control (MAC) security that defines connectionless data confidentiality and integrity for media access independent protocols.
Certificates
When configured without a CA certificate, server certificate validation is disabled and the device tries to authenticate itself regardless of what network it is connected to. When using a certificate, in Axis' implementation, the device and the authentication server authenticate themselves with digital certificates using EAP-TLS (Extensible Authentication Protocol - Transport Layer Security). To allow the device to access a network protected through certificates, you must install a signed client certificate on the device.
Authentication method: Select an EAP type used for authentication. Client certificate: Select a client certificate to use IEEE 802.1x. The authentication server uses the certificate to validate the client’s identity. CA certificates: Select CA certificates to validate the authentication server’s identity. When no certificate is selected, the device tries to authenticate itself regardless of what network it is connected to. EAP identity: Enter the user identity associated with the client certificate. EAPOL version: Select the EAPOL version that is used in the network switch. Use IEEE 802.1x: Select to use the IEEE 802.1x protocol. These settings are only available if you use IEEE 802.1x PEAP-MSCHAPv2 as the authentication method:
These settings are only available if you use IEEE 802.1ae MACsec (Static CAK/Pre-Shared Key) as the authentication method:
Prevent brute-force attacks
Blocking: Turn on to block brute-force attacks. A brute-force attack uses trial-and-error to guess login info or encryption keys. Blocking period: Enter the number of seconds to block a brute-force attack. Blocking conditions: Enter the number of authentication failures allowed per second before the block starts. You can set the number of failures allowed both on page level and device level. |
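The blocking behavior can be modeled roughly as follows. BruteForceGuard and its thresholds are hypothetical illustrations of the described settings, not the device's implementation:

```python
import time

class BruteForceGuard:
    """Sketch: after more than 'max_failures_per_second' authentication
    failures within one second, reject logins for 'blocking_period' seconds."""

    def __init__(self, max_failures_per_second=5, blocking_period=10):
        self.max_failures = max_failures_per_second
        self.blocking_period = blocking_period
        self.failures = []          # timestamps of recent failures
        self.blocked_until = 0.0

    def register_failure(self, now=None):
        now = time.monotonic() if now is None else now
        # Keep only failures from the last second, then record this one.
        self.failures = [t for t in self.failures if now - t < 1.0]
        self.failures.append(now)
        if len(self.failures) > self.max_failures:
            self.blocked_until = now + self.blocking_period

    def is_blocked(self, now=None):
        now = time.monotonic() if now is None else now
        return now < self.blocked_until

guard = BruteForceGuard(max_failures_per_second=3, blocking_period=10)
for t in [0.0, 0.1, 0.2, 0.3]:      # four failures within one second
    guard.register_failure(now=t)
print(guard.is_blocked(now=0.4))    # True
print(guard.is_blocked(now=11.0))   # False, the blocking period has passed
```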
Firewall
Activate: Turn on the firewall.
To make exceptions to the default policy, you can create rules that allow or deny connections to the device from specific addresses, protocols, and ports.
: Click to create another rule.
Pending rules: An overview of the latest tested rules that you are yet to confirm. Note The rules that have a time limit appear under Active rules until the displayed timer runs out, or until you confirm them. If you don't confirm them, they will appear under Pending rules once the timer runs out, and the firewall will revert to the previously defined settings. If you confirm them, they will replace the current active rules. Confirm rules: Click to activate the pending rules. Active rules: An overview of the rules you are currently running on the device. : Click to delete an active rule. : Click to delete all rules, both pending and active. |
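The confirm-or-revert flow for pending rules can be sketched as a small state machine. The Firewall class and rule strings below are illustrative only, not the device's API:

```python
class Firewall:
    """Sketch: tested rules run as active until a timer expires; if they are
    not confirmed in time, the firewall reverts to the previous settings."""

    def __init__(self, active_rules):
        self.active = list(active_rules)
        self.previous = None
        self.deadline = None

    def test_rules(self, rules, now, ttl):
        # New rules become active immediately, with a confirmation deadline.
        self.previous = self.active
        self.active = list(rules)
        self.deadline = now + ttl

    def confirm(self):
        # Confirmed rules replace the current active rules permanently.
        self.previous = None
        self.deadline = None

    def tick(self, now):
        # Revert if the timer ran out before the rules were confirmed.
        if self.deadline is not None and now >= self.deadline:
            self.active = self.previous
            self.previous, self.deadline = None, None

fw = Firewall(["allow 192.168.0.0/24"])
fw.test_rules(["deny all"], now=0, ttl=300)
fw.tick(now=301)     # the operator never confirmed
print(fw.active)     # ['allow 192.168.0.0/24']
```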
Custom signed AXIS OS certificate
To install test software or other custom software from Axis on the device, you need a custom signed AXIS OS certificate. The certificate verifies that the software is approved by both the device owner and Axis. The software can only run on a specific device which is identified by its unique serial number and chip ID. Only Axis can create custom signed AXIS OS certificates, since Axis holds the key to sign them. Install: Click to install the certificate. You need to install the certificate before you install the software. The context menu contains:
Accounts
Accounts
Add account: Click to add a new account. You can add up to 100 accounts. Account: Enter a unique account name. New password: Enter a password for the account. Passwords must be 1 to 64 characters long. Only ASCII printable characters (code 32 to 126) are allowed in the password, for example, letters, numbers, punctuation, and some symbols. Repeat password: Enter the same password again. Privileges:
The context menu contains: Update account: Edit the account properties. Delete account: Delete the account. You can’t delete the root account. |
Anonymous access
Allow anonymous viewing: Turn on to allow anyone to access the device as a viewer without logging in with an account. Allow anonymous PTZ operating: Turn on to allow anonymous users to pan, tilt, and zoom the image. |
SSH accounts
Add SSH account: Click to add a new SSH account.
Account: Enter a unique account name. New password: Enter a password for the account. Passwords must be 1 to 64 characters long. Only ASCII printable characters (code 32 to 126) are allowed in the password, for example, letters, numbers, punctuation, and some symbols. Repeat password: Enter the same password again. Comment: Enter a comment (optional). The context menu contains: Update SSH account: Edit the account properties. Delete SSH account: Delete the account. You can’t delete the root account. |
Virtual host
Add virtual host: Click to add a new virtual host. Enabled: Select to use this virtual host. Server name: Enter the name of the server. Only use numbers 0-9, letters A-Z, and hyphen (-). Port: Enter the port the server is connected to. Type: Select the type of authentication to use. Select between Basic, Digest, and Open ID. The context menu contains:
Disabled: The server is disabled. |
OpenID Configuration
If you can't use OpenID to sign in, use the Digest or Basic credentials you used when you configured OpenID to sign in.
Client ID: Enter the OpenID username. Outgoing Proxy: Enter the proxy address for the OpenID connection to use a proxy server. Admin claim: Enter a value for the admin role. Provider URL: Enter the web link for the API endpoint authentication. Format should be https://[insert URL]/.well-known/openid-configuration Operator claim: Enter a value for the operator role. Require claim: Enter the data that should be in the token. Viewer claim: Enter the value for the viewer role. Remote user: Enter a value to identify remote users. This helps display the current user in the device’s web interface. Scopes: Optional scopes that could be part of the token. Client secret: Enter the OpenID password. Save: Click to save the OpenID values. Enable OpenID: Turn on to close the current connection and allow device authentication from the provider URL. |
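A quick sanity check of the Provider URL format noted above can catch common mistakes before saving. The host in the example is hypothetical:

```python
def is_valid_provider_url(url: str) -> bool:
    """Check that a provider URL follows the format noted above: HTTPS,
    ending in the standard OpenID Connect discovery path."""
    return (url.startswith("https://")
            and url.endswith("/.well-known/openid-configuration"))

# 'login.example.com' is a hypothetical provider host.
print(is_valid_provider_url("https://login.example.com/.well-known/openid-configuration"))  # True
print(is_valid_provider_url("http://login.example.com/openid"))                             # False
```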
Events
Rules
A rule defines the conditions that trigger the product to perform an action. The list shows all the currently configured rules in the product.
You can create up to 256 action rules.
Add a rule: Create a rule. Name: Enter a name for the rule. Wait between actions: Enter the minimum time (hh:mm:ss) that must pass between rule activations. It is useful if the rule is activated by, for example, day-night mode conditions, to prevent small light changes during sunrise and sunset from activating the rule repeatedly. Condition: Select a condition from the list. A condition must be met for the device to perform an action. If multiple conditions are defined, all of them must be met to trigger the action. For information about specific conditions, see Get started with rules for events. Use this condition as a trigger: Select to make this first condition function only as a starting trigger. It means that once the rule is activated, it remains active for as long as all the other conditions are met, no matter the state of the first condition. If you don’t select this option, the rule will simply be active whenever all the conditions are met. Invert this condition: Select if you want the condition to be the opposite of your selection. Add a condition: Click to add an additional condition. Action: Select an action from the list and enter its required information. For information about specific actions, see Get started with rules for events. |
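The minimum-wait behavior of Wait between actions can be sketched as follows. The Rule class and parse_hhmmss helper are illustrative names, not the device's API:

```python
from datetime import datetime, timedelta

def parse_hhmmss(value: str) -> timedelta:
    """Parse the hh:mm:ss format used by Wait between actions."""
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

class Rule:
    """Hypothetical model of the minimum-wait behavior described above."""

    def __init__(self, wait_between_actions: str):
        self.wait = parse_hhmmss(wait_between_actions)
        self.last_activation = None

    def try_activate(self, now: datetime) -> bool:
        # Suppress the action if the minimum wait has not passed yet.
        if self.last_activation is not None and now - self.last_activation < self.wait:
            return False
        self.last_activation = now
        return True

rule = Rule("00:05:00")
print(rule.try_activate(datetime(2024, 6, 1, 6, 0)))  # True
print(rule.try_activate(datetime(2024, 6, 1, 6, 2)))  # False, within the 5 min wait
print(rule.try_activate(datetime(2024, 6, 1, 6, 6)))  # True
```

With a five-minute wait, flickering light levels around sunrise trigger the action at most once per five minutes.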
Recipients
You can set up your device to notify recipients about events or send files.
If you set up your device to use FTP or SFTP, don’t change or remove the unique sequence number that’s added to the file names. If you do that, only one image per event can be sent.
The list shows all the recipients currently configured in the product, along with information about their configuration.
You can create up to 20 recipients.
Add a recipient: Click to add a recipient. Name: Enter a name for the recipient. Type: Select from the list:
Test: Click to test the setup. The context menu contains: View recipient: Click to view all the recipient details. Copy recipient: Click to copy a recipient. When you copy, you can make changes to the new recipient. Delete recipient: Click to delete the recipient permanently. |
Schedules
Schedules and pulses can be used as conditions in rules. The list shows all the schedules and pulses currently configured in the product, along with information about their configuration. Add schedule: Click to create a schedule or pulse. |
Manual triggers
You can use the manual trigger to manually trigger a rule. The manual trigger can, for example, be used to validate actions during product installation and configuration. |
MQTT
MQTT (Message Queuing Telemetry Transport) is a standard messaging protocol for the Internet of Things (IoT). It was designed for simplified IoT integration and is used in a wide variety of industries to connect remote devices with a small code footprint and minimal network bandwidth. The MQTT client in Axis device software can simplify integration of data and events produced in the device to systems which are not video management software (VMS). Set up the device as an MQTT client. MQTT communication is based on two entities, the clients and the broker. The clients can send and receive messages. The broker is responsible for routing messages between clients. You can learn more about MQTT in AXIS OS Portal. |
ALPN
ALPN is a TLS/SSL extension that allows for the selection of an application protocol during the handshake phase of the connection between the client and server. This is used to enable MQTT traffic over the same port that is used for other protocols, such as HTTP. In some cases, there might not be a dedicated port open for MQTT communication. A solution in such cases is to use ALPN to negotiate the use of MQTT as the application protocol on a standard port, allowed by the firewalls. |
MQTT client
Connect: Turn on or off the MQTT client. Status: Shows the current status of the MQTT client. Broker Host: Enter the hostname or IP address of the MQTT server. Protocol: Select which protocol to use. Port: Enter the port number.
ALPN protocol: Enter the ALPN protocol name provided by your MQTT broker provider. This is only applicable with MQTT over SSL and MQTT over WebSocket Secure. Username: Enter the username that the client will use to access the server. Password: Enter a password for the username. Client ID: Enter a client ID. The client identifier is sent to the server when the client connects to it. Clean session: Controls the behavior at connection and disconnection time. When selected, the state information is discarded at connect and disconnect. HTTP proxy: A URL with a maximum length of 255 bytes. You can leave the field empty if you don't want to use an HTTP proxy. HTTPS proxy: A URL with a maximum length of 255 bytes. You can leave the field empty if you don't want to use an HTTPS proxy. Keep alive interval: Enables the client to detect when the server is no longer available without having to wait for the long TCP/IP timeout. Timeout: The time interval in seconds to allow a connect to complete. Default value: 60. Device topic prefix: Used in the default values for the topic in the connect message and LWT message on the MQTT client tab, and in the publication conditions on the MQTT publication tab. Reconnect automatically: Specifies whether the client should reconnect automatically after a disconnect. Connect message: Specifies whether a message should be sent out when a connection is established. Send message: Turn on to send messages. Use default: Turn off to enter your own default message. Topic: Enter the topic for the default message. Payload: Enter the content for the default message. Retain: Select to keep the state of the client on this topic. QoS: Change the QoS layer for the packet flow. Last Will and Testament message: The Last Will and Testament (LWT) lets a client provide a testament along with its credentials when connecting to the broker.
If the client disconnects ungracefully at some point later (for example, because its power source died), it can let the broker deliver a message to other clients. This LWT message has the same form as an ordinary message and gets routed via the same mechanics. Send message: Turn on to send messages. Use default: Turn off to enter your own default message. Topic: Enter the topic for the default message. Payload: Enter the content for the default message. Retain: Select to keep the state of the client on this topic. QoS: Change the QoS layer for the packet flow. |
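The LWT mechanics can be illustrated with a toy broker model. The class, client id, and topic names are hypothetical; a real deployment would use an MQTT client library and broker rather than this sketch:

```python
class Broker:
    """Toy model of LWT: the broker stores the will at connect time and
    publishes it only when the client disconnects ungracefully."""

    def __init__(self):
        self.wills = {}        # client id -> (topic, payload)
        self.delivered = []    # messages routed to subscribers

    def connect(self, client_id, will_topic, will_payload):
        self.wills[client_id] = (will_topic, will_payload)

    def disconnect(self, client_id, graceful: bool):
        will = self.wills.pop(client_id, None)
        if not graceful and will is not None:
            self.delivered.append(will)  # routed like an ordinary message

broker = Broker()
broker.connect("camera-1", "axis/camera-1/status", "offline")
broker.disconnect("camera-1", graceful=False)   # e.g. power loss
print(broker.delivered)  # [('axis/camera-1/status', 'offline')]
```

On a graceful disconnect the stored will is simply discarded and nothing is published.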
MQTT publication
Use default topic prefix: Select to use the default topic prefix, defined under Device topic prefix on the MQTT client tab. Include topic name: Select to include the topic that describes the condition in the MQTT topic. Include topic namespaces: Select to include ONVIF topic namespaces in the MQTT topic. Include serial number: Select to include the device’s serial number in the MQTT payload. Add condition: Click to add a condition. Retain: Defines which MQTT messages are sent as retained.
QoS: Select the desired level for the MQTT publication. |
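How the publication options shape the resulting topic can be sketched as follows. The ONVIF-style condition string and serial-number prefix are hypothetical examples, and the device's exact topic layout may differ:

```python
def build_topic(condition, device_prefix="", include_topic_name=True,
                include_namespaces=False):
    """Sketch: assemble the published MQTT topic from a condition such as
    'tns1:Device/tnsaxis:Status' (hypothetical), a device topic prefix,
    and the include-name/include-namespaces options."""
    topic = condition
    if not include_namespaces:
        # Strip ONVIF namespace prefixes like 'tns1:' from each level.
        topic = "/".join(part.split(":")[-1] for part in topic.split("/"))
    if not include_topic_name:
        topic = ""
    parts = [p for p in (device_prefix, topic) if p]
    return "/".join(parts)

print(build_topic("tns1:Device/tnsaxis:Status", device_prefix="axis/B8A44F000000"))
# axis/B8A44F000000/Device/Status
```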
MQTT subscriptions
Add subscription: Click to add a new MQTT subscription. Subscription filter: Enter the MQTT topic that you want to subscribe to. Use device topic prefix: Add the subscription filter as prefix to the MQTT topic. Subscription type:
QoS: Select the desired level for the MQTT subscription. |
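Subscription filters follow standard MQTT wildcard semantics: '+' matches exactly one topic level and '#' matches all remaining levels. A sketch of the matching logic (the broker performs this; the topics shown are hypothetical):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Standard MQTT filter matching: '+' matches one level,
    '#' matches the remaining levels."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("site/+/alarm", "site/gate1/alarm"))   # True
print(topic_matches("site/#", "site/gate1/alarm/high"))    # True
print(topic_matches("site/+/alarm", "site/gate1/status"))  # False
```

Use device topic prefix simply prepends the configured prefix to the filter before it is sent to the broker.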
MQTT overlays
Note Connect to an MQTT broker before you add MQTT overlay modifiers. Add overlay modifier: Click to add a new overlay modifier. Topic filter: Add the MQTT topic that contains the data you want to show in the overlay. Data field: Specify the key for the message payload that you want to show in the overlay, assuming the message is in JSON format.
Storage
Network storage
Ignore: Turn on to ignore network storage. Add network storage: Click to add a network share where you can save recordings.
Remove network storage: Click to unmount, unbind, and remove the connection to the network share. This removes all settings for the network share. Unbind: Click to unbind and disconnect the network share. Unmount: Click to unmount the network share. Write protect: Turn on to stop writing to the network share and protect recordings from being removed. You can’t format a write-protected network share. Retention time: Select how long to keep recordings, to limit the amount of old recordings, or to comply with regulations regarding data storage. If the network storage becomes full, old recordings are removed before the selected time period passes. Tools
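The retention behavior can be sketched as a pruning pass: recordings older than the retention time are removed, and when storage is full the oldest recording is dropped early. The list-of-tuples layout is hypothetical, not the device's storage format:

```python
from datetime import datetime, timedelta

def prune(recordings, retention_days, now, storage_full=False):
    """Sketch: 'recordings' is a hypothetical list of (name, start_time)
    tuples sorted oldest first. Entries older than the retention cutoff
    are removed; if storage is full, the oldest survivor is also removed
    before its retention time has passed."""
    cutoff = now - timedelta(days=retention_days)
    kept = [(name, t) for name, t in recordings if t >= cutoff]
    if storage_full and kept:
        kept = kept[1:]
    return kept

now = datetime(2024, 6, 30)
recs = [("a", datetime(2024, 6, 1)), ("b", datetime(2024, 6, 20)),
        ("c", datetime(2024, 6, 29))]
print([name for name, _ in prune(recs, retention_days=14, now=now)])
# ['b', 'c']
```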
Onboard storage
Important Risk of data loss and corrupted recordings. Do not remove the SD card while the device is running. Unmount the SD card before you remove it. Unmount: Click to safely remove the SD card. Write protect: Turn on to stop writing to the SD card and protect recordings from being removed. You can’t format a write-protected SD card. Autoformat: Turn on to automatically format a newly inserted SD card. It formats the file system into ext4. Ignore: Turn on to stop storing recordings on the SD card. When you ignore the SD card, the device no longer recognizes that the card exists. The setting is only available to administrators. Retention time: Select how long to keep recordings to limit the amount of old recordings or comply with data storage regulations. When the SD card is full, it deletes old recordings before their retention time has passed. Tools
Wear trigger: Set a value for the SD card wear level at which you want to trigger an action. The wear level ranges from 0–200%. A new SD card that has never been used has a wear level of 0%. A wear level of 100% indicates that the SD card is close to its expected lifetime. When the wear-level reaches 200%, there is a high risk of the SD card malfunctioning. We recommend setting the wear trigger between 80–90%. This gives you time to download any recordings as well as replace the SD card in time before it potentially wears out. The wear trigger allows you to set up an event and get a notification when the wear level reaches your set value. |
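The wear-level bands described above (0% new, 100% near expected lifetime, 200% high risk) can be summarized in a small helper. The 85% default trigger is a hypothetical choice within the recommended 80-90% band:

```python
def wear_status(wear_level: float, trigger: float = 85.0):
    """Interpret an SD card wear level in the 0-200% range described above.
    Returns (status, trigger_fired); the 85% default trigger is a
    hypothetical value inside the recommended 80-90% band."""
    if wear_level >= 200:
        status = "high risk of malfunction"
    elif wear_level >= 100:
        status = "past expected lifetime"
    else:
        status = "ok"
    return status, wear_level >= trigger

print(wear_status(42.0))    # ('ok', False)
print(wear_status(90.0))    # ('ok', True), time to plan a replacement
print(wear_status(200.0))   # ('high risk of malfunction', True)
```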
Stream profiles
A stream profile is a group of settings that affect the video stream. You can use stream profiles in different situations, for example, when you create events and use rules to record.
Add stream profile: Click to create a new stream profile. Preview: A preview of the video stream with the stream profile settings you select. The preview updates when you change the settings on the page. If your device has different view areas, you can change the view area in the drop-down in the bottom left corner of the image. Name: Add a name for your profile. Description: Add a description of your profile. Video codec: Select the video codec that should apply for the profile. Resolution: See Stream for a description of this setting. Frame rate: See Stream for a description of this setting. Compression: See Stream for a description of this setting. Zipstream: See Stream for a description of this setting. Optimize for storage: See Stream for a description of this setting. Dynamic FPS: See Stream for a description of this setting. Dynamic GOP: See Stream for a description of this setting. Mirror: See Stream for a description of this setting. GOP length: See Stream for a description of this setting. Bitrate control: See Stream for a description of this setting. Include overlays: Select what type of overlays to include. See Overlays for information about how to add overlays. Include audio: See Stream for a description of this setting. |
ONVIF
ONVIF accounts
ONVIF (Open Network Video Interface Forum) is a global interface standard that makes it easier for end-users, integrators, consultants, and manufacturers to take advantage of the possibilities offered by network video technology. ONVIF enables interoperability between different vendor products, increased flexibility, reduced cost and future-proof systems.
When you create an ONVIF account, you automatically enable ONVIF communication. Use the account name and password for all ONVIF communication with the device. For more information see the Axis Developer Community at axis.com.
Add accounts: Click to add a new ONVIF account.
Account: Enter a unique account name.
New password: Enter a password for the account. Passwords must be 1 to 64 characters long. Only ASCII printable characters (code 32 to 126) are allowed in the password, for example, letters, numbers, punctuation, and some symbols.
Repeat password: Enter the same password again.
Role:
The context menu contains:
Update account: Edit the account properties.
Delete account: Delete the account. You can’t delete the root account.
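ONVIF clients typically authenticate with this account using a WS-Security UsernameToken, where the password digest is Base64(SHA-1(nonce + created + password)) as defined by the WS-Security UsernameToken Profile. A minimal sketch of that computation (the password is a placeholder):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def username_token(password: str) -> dict:
    """Compute a WS-Security UsernameToken password digest, as used for
    ONVIF authentication: Base64(SHA-1(nonce + created + password))."""
    nonce = os.urandom(16)  # random per-request nonce
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    return {
        "Nonce": base64.b64encode(nonce).decode(),
        "Created": created,
        "PasswordDigest": base64.b64encode(digest).decode(),
    }

token = username_token("my-onvif-password")  # hypothetical password
```

The three returned fields map onto the `Nonce`, `Created`, and `Password` elements of the SOAP security header; most ONVIF client libraries perform this step for you.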
ONVIF media profiles
An ONVIF media profile consists of a set of configurations that you can use to change media stream settings. You can create new profiles with your own set of configurations or use preconfigured profiles for a quick setup.
Add media profile: Click to add a new ONVIF media profile.
Profile name: Add a name for the media profile.
Video source: Select the video source for your configuration.
Video encoder: Select the video encoding format for your configuration.
Note: Enable audio in the device to get the option to select an audio source and audio encoder configuration.
Audio source: Select the audio input source for your configuration.
Audio encoder: Select the audio encoding format for your configuration.
Audio decoder: Select the audio decoding format for your configuration.
Audio output: Select the audio output format for your configuration.
Metadata: Select the metadata to include in your configuration.
PTZ: Select the PTZ settings for your configuration.
Create: Click to save your settings and create the profile.
Cancel: Click to cancel the configuration and clear all settings.
profile_x: Click on the profile name to open and edit the preconfigured profile.
Analytics metadata
Metadata producers
Lists the apps that stream metadata and the channels they use.
Producer: The app that produces the metadata. Below the app is a list of the types of metadata the app streams from the device.
Channel: The channel that the app uses. Select to enable the metadata stream. Deselect for compatibility or resource management reasons.
Detectors
Camera tampering
The camera tampering detector generates an alarm when the scene changes, for example, when the lens is covered, sprayed or severely put out of focus, and the time in Trigger delay has passed. The tampering detector only activates when the camera has not moved for at least 10 seconds. During this period, the detector sets up a scene model to use as a comparison to detect tampering in current images. For the scene model to be set up properly, make sure that the camera is in focus, the lighting conditions are correct, and the camera doesn’t point at a scene that lacks contours, for example, a blank wall. Camera tampering can be used as a condition to trigger actions.
Trigger delay: Enter the minimum time that the tampering conditions must be active before the alarm triggers. This can help prevent false alarms for known conditions that affect the image.
Trigger on dark images: It is very difficult to generate alarms when the camera lens is sprayed, since it is impossible to distinguish that event from other situations where the image turns dark in a similar way, for example, when the lighting conditions change. Turn on this parameter to generate alarms for all cases where the image turns dark. When it’s turned off, the device doesn’t generate any alarm when the image turns dark.
Note: Tampering detection is intended for static and non-crowded scenes.
Audio detection
These settings are available for each audio input.
Sound level: Adjust the sound level to a value from 0–100, where 0 is the most sensitive and 100 the least sensitive. Use the activity indicator as a guide when you set the sound level. When you create events, you can use the sound level as a condition. You can choose to trigger an action if the sound level rises above, falls below, or passes the set value.
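The above/below/passes conditions can be sketched as follows. The exact edge semantics of "passes" (a crossing in either direction between two consecutive readings) are an assumption made for illustration:

```python
def sound_level_triggered(previous: int, current: int, threshold: int, mode: str) -> bool:
    """Evaluate an audio-detection condition against a 0-100 sound level.

    mode: "above"  - current level is above the set value
          "below"  - current level is below the set value
          "passes" - level crossed the set value in either direction
    """
    if mode == "above":
        return current > threshold
    if mode == "below":
        return current < threshold
    if mode == "passes":
        # Crossed going up, or crossed going down, between two readings.
        return (previous <= threshold < current) or (current <= threshold < previous)
    raise ValueError(f"unknown mode: {mode}")
```

For example, with the threshold set to 50, a jump from level 40 to level 60 satisfies both the "above" and "passes" conditions.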
Shock detection
Shock detector: Turn on to generate an alarm if the device is hit by an object or if it is tampered with.
Sensitivity level: Move the slider to adjust the sensitivity level at which the device should generate an alarm. A low value means that the device only generates an alarm if the hit is powerful. A high value means that the device generates an alarm even with mild tampering.
Accessories
I/O ports
Use digital input to connect external devices that can toggle between an open and closed circuit, for example, PIR sensors, door or window contacts, and glass break detectors.
Use digital output to connect external devices such as relays and LEDs. You can activate connected devices through the VAPIX® Application Programming Interface or the web interface.
Port Name: Edit the text to rename the port.
Direction: Indicates whether the port is an input port or an output port. If the port is configurable, you can click the icons to change between input and output.
Normal state: Click for open circuit, and for closed circuit.
Current state: Shows the current state of the port. The input or output is activated when the current state is different from the normal state. An input on the device has an open circuit when it’s disconnected or when there is a voltage above 1 V DC.
Note: During restart, the output circuit is open. When the restart is complete, the circuit goes back to the normal position. If you change any settings on this page, the output circuits go back to their normal positions regardless of any active triggers.
Supervised: Turn on to make it possible to detect and trigger actions if someone tampers with the connection to digital I/O devices. In addition to detecting if an input is open or closed, you can also detect if someone has tampered with it (that is, cut or shorted). Supervising the connection requires additional hardware (end-of-line resistors) in the external I/O loop.
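With end-of-line resistors in the loop, the device can distinguish a normal open or closed contact from a cut or shorted wire by the resistance it measures. The sketch below illustrates that classification logic only; the resistor values and tolerance are hypothetical assumptions, not Axis specifications (the required values are given in the device's installation guide).

```python
import math

# Hypothetical end-of-line resistor values (ohms) -- illustration only.
R_SERIES = 4_700      # in series with the contact
R_PARALLEL = 22_000   # in parallel with the contact

def classify_supervised_input(measured_ohms: float, tolerance: float = 0.1) -> str:
    """Map a measured loop resistance to a supervised-input state."""
    def near(expected: float) -> bool:
        return abs(measured_ohms - expected) <= expected * tolerance

    if measured_ohms < R_SERIES * (1 - tolerance):
        return "tamper: shorted"   # loop resistance far below the series resistor
    if near(R_SERIES):
        return "closed"            # contact closed: only the series resistor is seen
    if near(R_SERIES + R_PARALLEL):
        return "open"              # contact open: series + parallel resistor
    if math.isinf(measured_ohms) or measured_ohms > (R_SERIES + R_PARALLEL) * (1 + tolerance):
        return "tamper: cut"       # loop broken: no current path at all
    return "unknown"
```

The point of the design is that all four states map to distinct resistance bands, so tampering is detectable instead of looking like a normal open or closed contact.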
Edge-to-edge
Audio pairing allows you to use a compatible Axis network speaker as if it’s part of the main device. Once paired, the network speaker acts as an audio out device where you can play audio clips and transmit sound.
For this feature to work with a video management software (VMS), you must first pair the device with the speaker, then add the device to your VMS.
When you use a network-paired audio device in an event rule with 'Audio detection' as the condition and 'Play audio clip' as the action, set a 'Wait between actions (hh:mm:ss)' limit in the rule. This helps you avoid a detection loop if the capturing microphone picks up audio from the speaker.
Audio pairing
Address: Enter the host name or IP address of the network speaker.
Username: Enter the username.
Password: Enter the password for the user.
Speaker pairing: Select to pair a network speaker.
Clear fields: Click to clear all fields.
Connect: Click to establish a connection to the speaker.
PTZ pairing allows you to pair a radar with a PTZ camera to use autotracking. Radar PTZ autotracking makes the PTZ camera track objects based on information from the radar about the objects’ positions.
PTZ pairing
Address: Enter the host name or IP address of the PTZ camera.
Username: Enter the username of the PTZ camera.
Password: Enter the password for the PTZ camera.
Clear fields: Click to clear all fields.
Connect: Click to establish a connection to the PTZ camera.
Configure radar autotracking: Click to open and configure autotracking. You can also go to Radar > Radar PTZ autotracking to configure it.
Logs
Reports and logs
Reports
Logs
Network trace
A network trace file can help you troubleshoot problems by recording activity on the network.
Important: A network trace file might contain sensitive information, for example, certificates or passwords.
Trace time: Select the duration of the trace in seconds or minutes, and click Download.
Remote system log
Syslog is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, which indicates the software type generating the message, and assigned a severity level.
Server: Click to add a new server.
Host: Enter the hostname or IP address of the server.
Format: Select which syslog message format to use.
Protocol: Select the protocol to use:
Port: Edit the port number to use a different port.
Severity: Select which messages to send when triggered.
CA certificate set: See the current settings or add a certificate.
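Each syslog message carries a priority (PRI) value that encodes the facility and severity fields together, computed as facility × 8 + severity per RFC 5424. A minimal sketch of building such a message; the host name, app name, and message text are placeholders:

```python
from datetime import datetime, timezone

FACILITY_LOCAL0 = 16   # "local use 0" facility code
SEVERITY_WARNING = 4   # warning severity level

def syslog_message(facility: int, severity: int, host: str, app: str, text: str) -> str:
    """Format a minimal RFC 5424 syslog message.

    The PRI value in angle brackets encodes both fields: facility * 8 + severity.
    """
    pri = facility * 8 + severity
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # VERSION is 1; PROCID, MSGID, and STRUCTURED-DATA are NILVALUE ("-") here.
    return f"<{pri}>1 {timestamp} {host} {app} - - - {text}"

msg = syslog_message(FACILITY_LOCAL0, SEVERITY_WARNING, "axis-device", "events", "Tampering detected")
```

This is why the remote system log settings let you pick both a severity filter and a format: the receiving server uses the PRI value to route and filter messages.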
Plain config
Plain config is for advanced users with experience of Axis device configuration. Most parameters can be set and edited from this page.
Maintenance
Restart: Restart the device. This does not affect any of the current settings. Running applications restart automatically.
Restore: Return most settings to the factory default values. Afterwards you must reconfigure the device and apps, reinstall any apps that didn’t come preinstalled, and recreate any events and presets.
Important: The only settings saved after restore are:
Factory default: Return all settings to the factory default values. Afterwards you must reset the IP address to make the device accessible.
Note: All Axis device software is digitally signed to ensure that you only install verified software on your device. This further increases the overall minimum cybersecurity level of Axis devices. For more information, see the white paper “Axis Edge Vault” at axis.com.
AXIS OS upgrade: Upgrade to a new AXIS OS version. New releases can contain improved functionality, bug fixes, and completely new features. We recommend that you always use the latest AXIS OS release. To download the latest release, go to axis.com/support.
AXIS OS rollback: Revert to the previously installed AXIS OS version.
Learn more
Long-distance connections
This product supports fiber-optic cable installations through a media converter. Fiber-optic cable installations offer a number of benefits, such as:
Long-distance connection
High speed
Long lifetime
Large capacity of data transmission
Electromagnetic interference immunity
Find out more about fiber-optic cable installations in the white paper “Long distance surveillance - Fiber-optic communication in network video” at axis.com/learning/white-papers.
For information about how to install the media converter, see the Installation Guide for this product.
Capture modes
A capture mode is a preset configuration that defines how the camera captures images. The selected mode can affect the maximum resolution and maximum frame rate available in the device. If you use a capture mode with a lower resolution than the maximum, the field of view might be reduced. The capture mode also affects the shutter speed, which in turn affects the light sensitivity – a capture mode with a high maximum frame rate has a reduced light sensitivity, and vice versa. Note that with certain capture modes you might not be able to use WDR.
The lower resolution capture mode might be sampled from the original resolution, or it might be cropped out from the original, in which case the field of view could also be affected.
What capture mode to choose depends on the requirements for the frame rate and resolution of the specific surveillance setup. For specifications about available capture modes, see the product’s datasheet at axis.com.
Remote focus and zoom
The remote focus and zoom functionality allows you to make focus and zoom adjustments to your camera from a computer. It is a convenient way to ensure that the scene’s focus, viewing angle and resolution are optimized without having to visit the camera’s installation location.
Privacy masks
A privacy mask is a user-defined area that covers a part of the monitored area. In the video stream, privacy masks appear either as blocks of solid color or with a mosaic pattern.
You’ll see the privacy mask on all snapshots, recorded video, and live streams.
You can use the VAPIX® application programming interface (API) to hide the privacy masks.
Using multiple privacy masks may affect the product’s performance.
You can create several privacy masks. Each mask can have 3 to 10 anchor points.
Overlays
Overlays are superimposed over the video stream. They are used to provide extra information during recordings, such as a timestamp, or during product installation and configuration. You can add either text or an image.
The video streaming indicator is another type of overlay. It shows you that the live view video stream is live.
Streaming and storage
Video compression formats
Decide which compression method to use based on your viewing requirements, and on the properties of your network. The available options are:
Motion JPEG
To ensure support for the Opus audio codec, the Motion JPEG stream is always sent over RTP.
Motion JPEG, or MJPEG, is a digital video sequence that is made up of a series of individual JPEG images. These images are then displayed and updated at a rate sufficient to create a stream that shows constantly updated motion. For the viewer to perceive motion video the rate must be at least 16 image frames per second. Full motion video is perceived at 30 (NTSC) or 25 (PAL) frames per second.
The Motion JPEG stream uses considerable amounts of bandwidth, but provides excellent image quality and access to every image contained in the stream.
H.264 or MPEG-4 Part 10/AVC
H.264 is a licensed technology. The Axis product includes one H.264 viewing client license. Installing additional unlicensed copies of the client is prohibited. To purchase additional licenses, contact your Axis reseller.
H.264 can, without compromising image quality, reduce the size of a digital video file by more than 80% compared to the Motion JPEG format and by as much as 50% compared to older MPEG formats. This means that less network bandwidth and storage space are required for a video file. Or seen another way, higher video quality can be achieved for a given bitrate.
H.265 or MPEG-H Part 2/HEVC
H.265 can, without compromising image quality, reduce the size of a digital video file by more than 25% compared to H.264.
- H.265 is a licensed technology. The Axis product includes one H.265 viewing client license. Installing additional unlicensed copies of the client is prohibited. To purchase additional licenses, contact your Axis reseller.
- Most web browsers don’t support H.265 decoding, so the camera doesn’t support it in its web interface. Instead, you can use a video management system or application that supports H.265 decoding.
How do Image, Stream, and Stream profile settings relate to each other?
The Image tab contains camera settings that affect all video streams from the product. If you change something in this tab, it immediately affects all video streams and recordings.
The Stream tab contains settings for video streams. These are the settings you get if you request a video stream from the product without specifying, for example, resolution or frame rate. When you change the settings in the Stream tab, it doesn’t affect ongoing streams, but it takes effect when you start a new stream.
The Stream profiles settings override the settings from the Stream tab. If you request a stream with a specific stream profile, the stream contains the settings of that profile. If you request a stream without specifying a stream profile, or request a stream profile that doesn't exist in the product, the stream contains the settings from the Stream tab.
Bitrate control
Bitrate control helps you to manage the bandwidth consumption of your video stream.
Variable bitrate (VBR)
Variable bitrate allows the bandwidth consumption to vary depending on the level of activity in the scene. The more activity, the more bandwidth you need. With variable bitrate you are guaranteed constant image quality, but you need to make sure you have storage margins.
Maximum bitrate (MBR)
Maximum bitrate lets you set a target bitrate to handle bitrate limitations in your system. You might see a decline in image quality or frame rate as the instantaneous bitrate is kept below the specified target bitrate. You can choose to prioritize either image quality or frame rate. We recommend that you configure the target bitrate to a higher value than the expected bitrate. This gives you a margin in case there is a high level of activity in the scene.
Average bitrate (ABR)
With average bitrate, the bitrate is automatically adjusted over a longer period of time so that the stream meets the specified target while providing the best possible video quality based on your available storage. The bitrate is higher in scenes with a lot of activity than in static scenes, so with the average bitrate option you are more likely to get better image quality in busy scenes. When image quality is adjusted to meet the specified target bitrate, you can define the total storage required to store the video stream for a specified amount of time (retention time). Specify the average bitrate settings in one of the following ways:
To calculate the estimated storage need, set the target bitrate and the retention time.
To calculate the average bitrate, based on available storage and required retention time, use the target bitrate calculator.
[Figure: target bitrate and actual average bitrate over time]
You can also turn on maximum bitrate and specify a target bitrate within the average bitrate option.
[Figure: target bitrate and actual average bitrate with maximum bitrate enabled]
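The relationship the target bitrate calculator works with is plain arithmetic: storage equals the bitrate (converted to bytes per second) multiplied by the retention time, and vice versa. A sketch, assuming decimal units (1 GB = 10⁹ bytes) and ignoring container overhead:

```python
def storage_bytes(target_bitrate_bps: float, retention_days: float) -> float:
    """Estimate the storage needed for a given average (target) bitrate and retention time."""
    seconds = retention_days * 24 * 60 * 60
    return target_bitrate_bps / 8 * seconds  # bits/s -> bytes/s, times duration

def target_bitrate_bps(storage_bytes_available: float, retention_days: float) -> float:
    """Inverse calculation: the average bitrate that fits the available storage."""
    seconds = retention_days * 24 * 60 * 60
    return storage_bytes_available * 8 / seconds

# Example: a 2 Mbit/s average stream kept for 30 days needs about 648 GB.
needed = storage_bytes(2_000_000, 30)
```

This is only an estimate: the actual average bitrate also depends on scene activity, which is why the average bitrate option adjusts quality over time to stay near the target.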
Applications
With applications, you can get more out of your Axis device. AXIS Camera Application Platform (ACAP) is an open platform that makes it possible for third parties to develop analytics and other applications for Axis devices. Applications can be preinstalled on the device, available to download free of charge, or available for a license fee.
To find the user manuals for Axis applications, go to help.axis.com.
- Several applications can run at the same time but some applications might not be compatible with each other. Certain combinations of applications might require too much processing power or memory resources when run in parallel. Verify that the applications work together before deployment.
AXIS Object Analytics
AXIS Object Analytics is an analytic application that comes preinstalled on the camera. It detects objects that move in the scene and classifies them as, for example, humans or vehicles. You can set up the application to send alarms for different types of objects. To find out more about how the application works, see AXIS Object Analytics user manual.
AXIS Image Health Analytics
AXIS Image Health Analytics is an AI-based application that can be used to detect image degradations or tampering attempts. The application analyzes and learns the behavior of the scene to detect blurriness or underexposure in the image, or to detect an obstructed or redirected view. You can set up the application to send events for any of these detections, and trigger actions through the camera’s event system or third-party software.
To find out more about how the application works, see AXIS Image Health Analytics user manual.
Metadata visualization
Analytics metadata is available for moving objects in the scene. Supported object classes are visualized in the video stream through a bounding box surrounding the object, along with information about the object type and confidence level of the classification. To learn more about how to configure and consume analytics metadata, see AXIS Scene Metadata integration guide.
Cybersecurity
For product-specific information about cybersecurity, see the product's datasheet at axis.com.
For in-depth information about cybersecurity in AXIS OS, read the AXIS OS Hardening guide.
Signed OS
Signed OS is implemented by the software vendor signing the AXIS OS image with a private key. When the signature is attached to the operating system, the device will validate the software before installing it. If the device detects that the integrity of the software is compromised, the AXIS OS upgrade will be rejected.
Secure boot
Secure boot is a boot process that consists of an unbroken chain of cryptographically validated software, starting in immutable memory (boot ROM). Being based on the use of signed OS, secure boot ensures that a device can boot only with authorized software.
Axis Edge Vault
Axis Edge Vault provides a hardware-based cybersecurity platform that safeguards the Axis device. It offers features to guarantee the device’s identity and integrity and to protect your sensitive information from unauthorized access. It builds on a strong foundation of cryptographic computing modules (secure element and TPM) and SoC security (TEE and secure boot), combined with expertise in edge device security.
TPM module
The TPM (Trusted Platform Module) is a component that provides cryptographic features to protect information from unauthorized access. It is always activated and there are no settings you can change.
Axis device ID
Being able to verify the origin of the device is key to establishing trust in the device identity. During production, devices with Axis Edge Vault are assigned a unique, factory-provisioned, and IEEE 802.1AR-compliant Axis device ID certificate. This works like a passport to prove the origin of the device. The device ID is securely and permanently stored in the secure keystore as a certificate signed by the Axis root certificate. The device ID can be leveraged by the customer’s IT infrastructure for automated secure device onboarding and secure device identification.
Signed video
Signed video ensures that video evidence can be verified as untampered without proving the chain of custody of the video file. Each camera uses its unique video signing key, which is securely stored in the secure keystore, to add a signature into the video stream. When the video is played, the file player shows whether the video is intact. Signed video makes it possible to trace the video back to the camera origin and verifies that the video has not been tampered with after it left the camera.
To learn more about the cybersecurity features in Axis devices, go to axis.com/learning/white-papers and search for cybersecurity.
Specifications
Product overview
Do not lift the product by the cable cover.
LED indicators
- The Status LED can be configured to flash while an event is active.
- The LEDs turn off when you close the casing.
Status LED | Indication |
---|---|
Unlit | Connection and normal operation. |
Green | Shows steady green for 10 seconds for normal operation after startup completed. |
Amber | Steady during startup. Flashes during device software upgrade or reset to factory default. |
Amber/Red | Flashes amber/red if network connection is unavailable or lost. |
Red | Device software upgrade failure. |
Buzzer
Buzzer signal for focus assistant
Only valid for optional P-iris, DC-iris or manual iris lenses.
Buzzer | Lens |
---|---|
Fast interval | Optimally adjusted |
Medium interval | Less optimally adjusted |
Slow interval | Poorly adjusted |
SD card slot
This device supports microSD/microSDHC/microSDXC cards.
For SD card recommendations, see axis.com.
microSD, microSDHC, and microSDXC Logos are trademarks of SD-3C LLC. microSD, microSDHC, microSDXC are trademarks or registered trademarks of SD-3C, LLC in the United States, other countries or both.
Buttons
Control button
The control button is used for:
- Resetting the product to factory default settings. See Reset to factory default settings.
- Connecting to a one-click cloud connection (O3C) service over the internet. To connect, press and hold the button for about 3 seconds until the status LED flashes green.
Intrusion alarm switch
Use the intrusion alarm switch to get a notification when someone opens the device’s housing. Create a rule to make the device perform an action when the switch is activated. See Trigger an alarm if someone opens the enclosure.
Connectors
Network connector
RJ45 Ethernet connector with Power over Ethernet Plus (PoE+).
Audio connector
Audio in – 3.5 mm input for a mono microphone, or a line-in mono signal (left channel is used from a stereo signal).
Audio in – 3.5 mm input for a digital microphone, an analog mono microphone, or a line-in mono signal (left channel is used from a stereo signal).
Audio out – 3.5 mm output for audio (line level) that can be connected to a public address (PA) system or an active speaker with a built-in amplifier. A stereo connector must be used for audio out.
Audio input
1 Tip | 2 Ring | 3 Sleeve |
---|---|---|
Unbalanced microphone (with or without electret power) or line-in | Electret power if selected | Ground |
Balanced microphone (with or without phantom power) or line-in, “hot” signal | Balanced microphone (with or without phantom power) or line-in, “cold” signal | Ground |
Digital signal | Ring power if selected | Ground |
Audio output
1 Tip | 2 Ring | 3 Sleeve |
---|---|---|
Channel 1, unbalanced line, mono | Channel 1, unbalanced line, mono | Ground |
I/O connector
Use the I/O connector with external devices in combination with, for example, motion detection, event triggering, and alarm notifications. In addition to the 0 V DC reference point and power (12 V DC output), the I/O connector provides the interface to:
- Digital input – For connecting devices that can toggle between an open and closed circuit, for example, PIR sensors, door/window contacts, and glass break detectors.
- Supervised input – Makes it possible to detect tampering on a digital input.
- Digital output – For connecting external devices such as relays and LEDs. Connected devices can be activated by the VAPIX® Application Programming Interface, through an event, or from the device’s web interface.
6-pin terminal block
Function | Pin | Notes | Specifications |
---|---|---|---|
DC ground | 1 | 0 V DC | |
DC output | 2 | Can be used to power auxiliary equipment. Note: This pin can only be used as power out. | 12 V DC, max load = 50 mA |
Configurable (input or output) | 3–6 | Digital input or supervised input – Connect to pin 1 to activate, or leave floating (unconnected) to deactivate. To use supervised input, install end-of-line resistors. See the connection diagram for information about how to connect the resistors. | 0 to max 30 V DC |
| | Digital output – Internally connected to pin 1 (DC ground) when active, and floating (unconnected) when inactive. If used with an inductive load, e.g., a relay, connect a diode in parallel with the load to protect against voltage transients. | 0 to max 30 V DC, open drain, 100 mA |
Example
Power connector
2-pin terminal block for DC power input. Use a Safety Extra Low Voltage (SELV) compliant limited power source (LPS) with either a rated output power limited to ≤100 W or a rated output current limited to ≤5 A.
RS485/RS422 connector
Two 2-pin terminal blocks for RS485/RS422 serial interface.
The serial port can be configured to support:
Two-wire RS485 half duplex
Four-wire RS485 full duplex
Two-wire RS422 simplex
Four-wire RS422 full duplex point to point communication
Function | Notes |
---|---|
RS485/RS422 TX(A) | TX pair for RS422 and 4-wire RS485 |
RS485/RS422 TX(B) | |
RS485A alt RS485/422 RX(A) | RX pair for all modes (combined RX/TX for 2-wire RS485) |
RS485B alt RS485/422 RX(B) | |
Troubleshooting
Reset to factory default settings
Possibly hazardous optical radiation is emitted from this product. It can be harmful to the eyes. Don’t stare at the operating lamp.
Reset to factory default should be used with caution. A reset to factory default resets all settings, including the IP address, to the factory default values.
To reset the product to the factory default settings:
1. Disconnect power from the product.
2. Press and hold the control button while reconnecting power. See Product overview.
3. Keep the control button pressed for 15–30 seconds until the status LED indicator flashes amber.
4. Release the control button. The process is complete when the status LED indicator turns green. If no DHCP server is available on the network, the device IP address will default to one of the following:
- Devices with AXIS OS 12.0 and later: Obtained from the link-local address subnet (169.254.0.0/16)
- Devices with AXIS OS 11.11 and earlier: 192.168.0.90/24
Use the installation and management software tools to assign an IP address, set the password, and access the device.
The installation and management software tools are available from the support pages on axis.com/support.
You can also reset parameters to factory default through the device’s web interface. Go to Maintenance > Factory default and click Default.
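If you need to script a check for which no-DHCP fallback case applies after a reset, the two defaults above can be tested with Python's standard ipaddress module. This is an illustration, not an Axis tool:

```python
import ipaddress

LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")    # AXIS OS 12.0 and later
LEGACY_DEFAULT = ipaddress.ip_address("192.168.0.90")  # AXIS OS 11.11 and earlier

def is_factory_default_address(addr: str) -> bool:
    """True if the address matches either no-DHCP fallback described above."""
    ip = ipaddress.ip_address(addr)
    return ip in LINK_LOCAL or ip == LEGACY_DEFAULT
```

Such a check can help a discovery script decide whether a device on the network is still at its post-reset address and needs an IP assigned.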
AXIS OS options
Axis offers device software management according to either the active track or one of the long-term support (LTS) tracks. Being on the active track means continuously getting access to all the latest product features, while the LTS tracks provide a fixed platform with periodic releases focused mainly on bug fixes and security updates.
Using AXIS OS from the active track is recommended if you want to access the newest features, or if you use Axis end-to-end system offerings. The LTS tracks are recommended if you use third-party integrations, which are not continuously validated against the latest active track. With LTS, the products can maintain cybersecurity without introducing any significant functional changes or affecting any existing integrations. For more detailed information about Axis device software strategy, go to axis.com/support/device-software.
Check the current AXIS OS version
AXIS OS determines the functionality of our devices. When you troubleshoot a problem, we recommend that you start by checking the current AXIS OS version. The latest version might contain a correction that fixes your particular problem.
To check the current AXIS OS version:
Go to the device’s web interface > Status.
Under Device info, see the AXIS OS version.
Upgrade AXIS OS
- Preconfigured and customized settings are saved when you upgrade the device software (provided that the features are available in the new AXIS OS), although this is not guaranteed by Axis Communications AB.
- Make sure the device remains connected to the power source throughout the upgrade process.
When you upgrade the device with the latest AXIS OS version in the active track, the product receives the latest functionality available. Always read the upgrade instructions and release notes available with each new release before you upgrade. To find the latest AXIS OS version and the release notes, go to axis.com/support/device-software.
1. Download the AXIS OS file to your computer, available free of charge at axis.com/support/device-software.
2. Log in to the device as an administrator.
3. Go to Maintenance > AXIS OS upgrade and click Upgrade.
When the upgrade has finished, the product restarts automatically.
Technical issues, clues, and solutions
If you can’t find what you’re looking for here, try the troubleshooting section at axis.com/support.
Problems upgrading AXIS OS
AXIS OS upgrade failure: If the upgrade fails, the device reloads the previous version. The most common reason is that the wrong AXIS OS file has been uploaded. Check that the name of the AXIS OS file corresponds to your device and try again.
Problems after AXIS OS upgrade: If you experience problems after the upgrade, roll back to the previously installed version from the Maintenance page.
Problems setting the IP address
The device is located on a different subnet: If the IP address intended for the device and the IP address of the computer used to access the device are located on different subnets, you cannot set the IP address. Contact your network administrator to obtain an IP address.
The IP address is being used by another device: Disconnect the Axis device from the network. Run the ping command (in a Command/DOS window, type ping followed by the IP address of the device).
Possible IP address conflict with another device on the same subnet: The static IP address in the Axis device is used before the DHCP server sets a dynamic address. This means that if the same default static IP address is also used by another device, there may be problems accessing the device.
The device can’t be accessed from a browser

| Issue | Solution |
| --- | --- |
| Can’t log in | When HTTPS is enabled, ensure that the correct protocol (HTTP or HTTPS) is used when attempting to log in. You may need to manually type http or https in the browser's address field. If the password for the root account is lost, the device must be reset to the factory default settings. See Reset to factory default settings. |
| The IP address has been changed by DHCP | IP addresses obtained from a DHCP server are dynamic and may change. If the IP address has been changed, use AXIS IP Utility or AXIS Device Manager to locate the device on the network. Identify the device by its model or serial number, or by the DNS name (if the name has been configured). If required, a static IP address can be assigned manually. For instructions, go to axis.com/support. |
| Certificate error when using IEEE 802.1X | For authentication to work properly, the date and time settings in the Axis device must be synchronized with an NTP server. Go to System > Date and time. |
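The NTP requirement above can be sanity-checked from any machine. The sketch below builds a minimal SNTP (RFC 4330) client request and decodes a transmit timestamp, which is the same arithmetic a client uses to compare local time against server time. It runs against a synthetic response so no network is needed, and it is a generic illustration, not anything Axis-specific:

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix) epochs

def build_sntp_request() -> bytes:
    """48-byte SNTP client request: LI=0, VN=3, Mode=3 (client)."""
    pkt = bytearray(48)
    pkt[0] = 0x1B
    return bytes(pkt)

def parse_transmit_time(response: bytes) -> float:
    """Extract the transmit timestamp (bytes 40-47) as Unix time."""
    secs, frac = struct.unpack("!II", response[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

# Round-trip on a synthetic response: encode a known time the way a server would.
now = 1700000000.5
resp = bytes(40) + struct.pack(
    "!II", int(now) + NTP_EPOCH_OFFSET, int((now - int(now)) * 2**32)
)
print(abs(parse_transmit_time(resp) - now) < 1e-6)  # True
```

If the device's clock drifts from the time such a query returns, 802.1X certificate validation can fail even though the certificates themselves are valid.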
The device is accessible locally but not externally

To access the device externally, we recommend that you use one of the following applications for Windows®:

- AXIS Companion: free of charge, ideal for small systems with basic surveillance needs.
- AXIS Camera Station: 30-day trial version free of charge, ideal for small to mid-size systems.

For instructions and download, go to axis.com/vms.
Can’t connect over port 8883 with MQTT over SSL

| Issue | Solution |
| --- | --- |
| The firewall blocks traffic using port 8883 as it’s deemed insecure. | In some cases the server/broker might not provide a specific port for MQTT communication. It may still be possible to use MQTT over a port normally used for HTTP/HTTPS traffic. |
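As an example of the workaround above, a broker can be told to serve MQTT over TLS on a port that firewalls usually allow. This Mosquitto configuration fragment is a hedged sketch: the certificate paths are hypothetical, and other brokers use different directives:

```
# mosquitto.conf sketch: serve MQTT over TLS on 443 (normally HTTPS)
# instead of the default secure-MQTT port 8883.
listener 443
cafile   /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile  /etc/mosquitto/certs/server.key
```

On the device side, point the MQTT client at port 443 instead of 8883; the TLS handshake is unchanged, only the port differs.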
Problems with the radar-video fusion

| Issue | Solution |
| --- | --- |
| The bounding box doesn’t cover the object precisely | If there is no video analytics detection, the camera shows the projection of the radar detection in the image, which is not as accurate as a video analytics bounding box. The offset could also be due to elevation differences in the scene, such as a sloping road, a hill, or a depression. If the box is too high or too low, make sure that the installation height is set correctly. You can also improve the accuracy of the bounding boxes with the autocalibration feature. To use autocalibration, go to Radar > Autocalibration. |
| The bounding box shows 1 person when in reality there are 2 people | If two people walk close together and are detected only by the radar, they are classified as one person and only one bounding box appears. When they enter the analytics fusion zone, they are classified correctly. |
| The bounding box shifts its position when tracking an object | When both radar and camera analytics detect the same object, or when only the camera analytics detects it, the bounding box is drawn tightly around the object using the camera information. If the video detection is lost, the bounding box is drawn at the position of the radar projection, which is less accurate. Once the video detection is picked up again, the bounding box returns to the correct position. You can also improve the accuracy of the bounding boxes with the autocalibration feature. To use autocalibration, go to Radar > Autocalibration. |
| I don’t get the same detection distance as the manual says | |
| How can I minimize false alarms? | |
| Radar interference | The device uses one of two radar channels. Within each channel, up to four radars can negotiate how to best use that frequency. Despite this functionality, you may sometimes see a warning message about interference from the camera. In that case, manually select a channel for each device. Devices that are physically close to each other should be set to the same channel, which makes it easier for them to avoid interference. |
Performance considerations
The following factors are the most important to consider:
- Heavy network utilization due to poor infrastructure affects the bandwidth.
Contact support
If you need more help, go to axis.com/support.