
Artificial Intelligence and Surveillance: Where Do We Draw the Line?

The use of Artificial Intelligence (AI) in surveillance is a theme that occurs in many sci-fi movies, but it is quickly becoming a reality in many parts of the world. AI is being used to sort through huge amounts of video data, far quicker than human eyes possibly could, to identify wanted criminals and other potential hazards. Whether that makes the world a safer place or a wee bit scarier depends on how much you trust the people with the technology.

AI Equipped Body Cams

U.S. police are working with tech companies to introduce an AI component to their body cams to help them identify wanted persons, such as suspects or missing children. The algorithm monitors the camera footage and alerts the officer filming when it detects something, or someone, of interest. The officer remains in control of how they act on the information.
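A rough way to picture such a pipeline is sketched below in Python. This is not any vendor's actual code: it assumes each detected face is reduced to an embedding vector and compared against a watchlist, with an alert raised only above a similarity threshold. The `detect_face_embeddings` stand-in, the watchlist entries, and the 0.8 cutoff are all illustrative assumptions.

```python
import numpy as np

# Hypothetical watchlist: identifier -> face embedding (unit vectors for illustration).
WATCHLIST = {
    "missing_child_042": np.random.default_rng(0).normal(size=128),
    "suspect_117": np.random.default_rng(1).normal(size=128),
}
WATCHLIST = {k: v / np.linalg.norm(v) for k, v in WATCHLIST.items()}

ALERT_THRESHOLD = 0.8  # assumed value; real systems tune this to limit false alerts


def detect_face_embeddings(frame: np.ndarray) -> list[np.ndarray]:
    """Stand-in for a face detector plus embedding model running on the camera feed."""
    # A real deployment would run a neural network here; this fakes a single face.
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    v = rng.normal(size=128)
    return [v / np.linalg.norm(v)]


def check_frame(frame: np.ndarray) -> list[tuple[str, float]]:
    """Return (watchlist_id, similarity) pairs that exceed the alert threshold."""
    alerts = []
    for emb in detect_face_embeddings(frame):
        for person_id, ref in WATCHLIST.items():
            similarity = float(np.dot(emb, ref))  # cosine similarity of unit vectors
            if similarity >= ALERT_THRESHOLD:
                alerts.append((person_id, similarity))
    return alerts


# The officer, not the software, decides what to do with an alert.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # placeholder video frame
# With the placeholder detector above this will typically print nothing.
for person_id, score in check_frame(frame):
    print(f"ALERT: possible match {person_id} (similarity {score:.2f}), review before acting")
```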

There have been issues raised, such as concerns about privacy and so-called ‘false stops.’ It sounds unnerving, but the implementation is not as far from current practice as some people would like to think. Most people are already included in a facial recognition database somewhere in the world, and police can already search data such as driver’s license photos and mug shots. Where the new technology differs from these ‘classic’ approaches is that it offers a more sophisticated search system, the ability to sort through data on the go, and real-time alerts that officers can act on.

The system is not cheap. As well as substantial research and development costs, there are obvious administrative and IT maintenance costs. Not for the first time, police departments are coming to terms with the fact that, to operate in these new ways, they must look at their budgets in a new light. Officers are already equipped with mobile phones on relatively inexpensive plans, as well as their police-issue radios. This new facility, however, requires the transmission of hundreds of images a day over mobile networks and consumes, on average, 6 GB of 4G mobile broadband data per day.
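To put that figure in budget terms, a quick back-of-the-envelope calculation shows how the 6 GB/day number compounds. The fleet size and month length below are illustrative assumptions, not figures from any department.

```python
# Back-of-the-envelope data-usage estimate based on the 6 GB/day figure above.
GB_PER_OFFICER_PER_DAY = 6
OFFICERS = 100          # hypothetical department size, for illustration only
DAYS_PER_MONTH = 30

per_officer_month = GB_PER_OFFICER_PER_DAY * DAYS_PER_MONTH   # 180 GB per officer
department_month = per_officer_month * OFFICERS               # 18,000 GB department-wide

print(f"Per officer:  {per_officer_month} GB/month")
print(f"Department:   {department_month / 1000:.1f} TB/month")
```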

What AI Can Help Avoid

Currently, manual identification of possible security threats is often a lengthy, imprecise process, open to individual bias and to human limitations such as boredom. The security process often includes bag and body scans, multiple checkpoints, visual matching of ID to facial features, and even manual pat-downs that some people find deeply invasive.

Another concern is that people who pose a security threat can avoid detection by disguising their features or by slipping past distracted security personnel. It is impossible for human eyes to maintain watchful attention without ever losing focus.

Artificial Intelligence can make the security process easier for most people by using facial recognition to flag people of potential concern. Other possible future innovations could measure heart rate and stress levels to single out people with unusual physical responses for further screening.
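As a thought experiment only, and not a description of any deployed system, flagging "unusual physical responses" could amount to something as simple as a z-score check against a crowd baseline, combined with the facial recognition flag to decide who gets a secondary screening. The cutoff value and the inputs below are assumptions for illustration.

```python
from statistics import mean, stdev

def needs_secondary_screening(heart_rate: float,
                              crowd_rates: list[float],
                              face_flagged: bool,
                              z_cutoff: float = 2.5) -> bool:
    """Hypothetical triage rule: refer a person for secondary screening if they matched
    a watchlist OR their heart rate is an outlier relative to the crowd baseline."""
    baseline_mean = mean(crowd_rates)
    baseline_sd = stdev(crowd_rates) or 1.0  # avoid dividing by zero on a flat baseline
    z = (heart_rate - baseline_mean) / baseline_sd
    return face_flagged or z > z_cutoff

# Example: a crowd baseline of resting heart rates and one elevated reading.
crowd = [72, 68, 75, 70, 74, 69, 71, 73]
print(needs_secondary_screening(heart_rate=110, crowd_rates=crowd, face_flagged=False))  # True
print(needs_secondary_screening(heart_rate=73, crowd_rates=crowd, face_flagged=False))   # False
```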

Chinese Security

While Western countries often hold back and engage in lengthy debate about whether something should be done, China tends to simply go ahead and do it. Chinese companies have been using facial recognition as a feature of security protocols for some time, but have pulled ahead of the pack by also employing early forms of AI to analyze the data that the cameras and software capture.

Some Chinese railway police have smart glasses with built-in facial recognition technology that lets them identify suspects in crowds of commuters. Facial recognition is also being used to keep an eye on citizens more broadly, from watching for suspects to spotting minor infractions like jaywalking.

SenseTime, a Chinese AI company that builds these systems and uses them in its own workplace, can scan a street and return information on the people and things in the footage: the technology offers an approximate age, gender, and clothing description for each person, along with descriptions of the cars and their license plates as they pass by. The more data the system has, the better it can identify the people in the footage. In public it gives only general estimates, whereas in the company's own offices people are personally identified as they arrive at work and monitored as they move around the building.
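That description suggests two modes of output: coarse, anonymous attribute estimates for people on a public street, and personal identification only where a gallery of known faces exists, such as the office. A hedged sketch of the data shapes involved might look like the following; the field names, the gallery, and the `observe_person` logic are assumptions for illustration, not SenseTime's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonObservation:
    approx_age: int            # rough estimate only
    gender: str
    clothing: str
    identity: Optional[str]    # populated only where a known-face gallery exists

@dataclass
class VehicleObservation:
    description: str
    license_plate: Optional[str]

# Hypothetical gallery used only inside the office: employee name -> face template.
OFFICE_GALLERY = {"a.chen": "template-001", "b.osei": "template-002"}

def observe_person(face_template: str, attrs: dict, in_office: bool) -> PersonObservation:
    """Return a coarse, anonymous estimate on the street; attach an identity only
    when the footage comes from the office and the face matches the gallery."""
    identity = None
    if in_office:
        identity = next((name for name, t in OFFICE_GALLERY.items() if t == face_template), None)
    return PersonObservation(attrs["age"], attrs["gender"], attrs["clothing"], identity)

street = observe_person("template-xyz", {"age": 30, "gender": "female", "clothing": "red coat"}, in_office=False)
office = observe_person("template-001", {"age": 41, "gender": "male", "clothing": "grey suit"}, in_office=True)
print(street)  # identity=None: only general estimates for the public
print(office)  # identity='a.chen': personally identified at work
```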

Safe Or Scary?

The combination of technological sophistication and governmental support means that China has become an early adopter of these AI security systems, but it seems the U.S. might not be too far behind. If you ask the people involved in the projects, the systems are intended only to keep citizens safe and are simply a mobile, more sophisticated extension of software that has been in use for some time.

These systems still rely heavily on human checks, simply providing an alert that is analyzed and acted upon by a trained officer. However, the potential for abuse and error has been raised as an area of concern. Balancing privacy against an increased ability to track persons of interest has always been a delicate act, and the line between the two priorities could get a lot thinner as this technology develops.
