Genetec evaluates the implications of large language AI models for physical security

AI offers great potential for automation and cost savings. But is it safe?

Large Language Models (LLMs) are rapidly transforming the global landscape. In just a few months since the launch of OpenAI’s artificial intelligence (AI) chatbot, ChatGPT, more than 100 million users have embraced the platform, making it one of the fastest-growing consumer applications in recent history. The versatility of LLMs, which can handle tasks ranging from answering questions and explaining complex topics to writing scripts and code, has generated intense enthusiasm and global debate about the scope and implications of this AI technology.

“While LLMs have become a hot topic recently, it’s worth noting that the technology has been around for a long time. Advances are ongoing, and LLMs and other AI tools are now creating opportunities for greater automation across a variety of tasks. Having a well-grounded understanding of AI’s limitations and potential risks is essential,” says Ueric Melo, Genetec’s Cybersecurity and Privacy expert for Latin America and the Caribbean.

When weighing the risks of LLMs, it is important to consider that they are trained, through unsupervised methods on large datasets drawn from the Internet, to prioritize satisfying the user. As a result, their responses are not always accurate, truthful, or unbiased, which is dangerous in a security context. They can also produce ‘AI hallucinations’, in which the model generates answers that seem plausible but are not grounded in real-world facts or data. LLMs pose serious privacy and confidentiality risks as well: their training data can include confidential information about individuals and organizations, and each text input may be used to train subsequent versions of the model.

In physical security, AI is already being used in interesting ways: to expedite investigations by searching video recordings for people and vehicles with operator-defined characteristics, to automate people counting, to detect license plates, and to strengthen cybersecurity.

As AI algorithms process large volumes of data quickly and accurately, the technology is becoming an increasingly important tool for physical security solutions. However, its capacity to use personal data also expands, which can impact privacy. AI can also inadvertently produce biased results, affecting decisions in ways that may ultimately lead to discrimination. In other words, despite its revolutionary potential, its implementation demands responsibility. This is why Genetec™ has established three pillars to guide its use of AI: privacy and data governance, ensuring usage complies with relevant data protection regulations; transparency and impartiality; and human-driven decisions, meaning its AI models never make critical decisions on their own, and a human being is always informed and has the final say.

More information at https://www.genetec.com/br

Source: Genetec