Agentic AI: The Double-Edged Sword of Autonomous Technology

Artificial intelligence (AI) is expanding rapidly, and organizations are recognizing the need for a deliberate strategy to drive AI adoption. Agentic AI can act on its own, which makes caution essential. With smart choices, though, autonomous technology can become a powerful force for progress.

The average person makes about 35,000 decisions every day. Those aren't simple, isolated choices; they form chains of decisions that branch in different directions, and because circumstances change, a person's decision at a particular branch point can vary with any number of factors. Agentic AI aims to replicate this autonomous decision-making. Much of the AI work before agentic systems focused on large language models, where the goal was to prompt a model and draw knowledge out of unstructured data: essentially a question-and-answer process. Agentic AI goes beyond that. It can perform complex tasks that involve multiple steps, adapting to changing circumstances along the way.

For example, in the digital identity field, a task might involve verifying a set of results. It sounds easy, but the steps beneath the surface are complex and differ with every data set. Could agentic AI accomplish that task? Could it work through complex, dynamic branch points, make autonomous decisions, and act on them? Doing so requires stringing logic together across thousands of decisions, which makes agentic AI a genuinely difficult problem to solve. Experts who have spent years with machine learning and automation technology believe agentic AI is a game-changer, but its potential for fraud worries them: fraudsters can use the technology to exploit weaknesses in security. Document verification, for instance, might seem straightforward, but behind the scenes it involves multiple steps, including image capture and data collection. That creates a large surface area for fraudsters to probe with agentic AI.
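The branch points described above can be sketched as a small decision function. This is a minimal illustration, not a real verification pipeline; the stages, thresholds, and field names are all assumptions made up for the example.

```python
from dataclasses import dataclass, field

# Hypothetical document model; real pipelines carry far more state.
@dataclass
class Document:
    image_quality: float          # 0.0-1.0 capture quality score (assumed metric)
    fields: dict = field(default_factory=dict)

def verify_document(doc: Document) -> str:
    """Walk the branch points of a simplified verification task."""
    # Branch 1: is the captured image usable at all?
    if doc.image_quality < 0.5:
        return "recapture"        # loop back rather than fail outright
    # Branch 2: were the expected fields extracted?
    required = {"name", "dob", "doc_number"}
    if required - doc.fields.keys():
        return "manual_review"    # escalate instead of guessing
    # Branch 3: do the extracted fields pass a basic consistency check?
    if not doc.fields["doc_number"].isalnum():
        return "reject"
    return "verified"

print(verify_document(Document(0.9, {"name": "A", "dob": "1990-01-01",
                                     "doc_number": "X123"})))  # verified
```

An agentic system would chain thousands of branches like these, re-evaluating each one as the data changes, which is exactly where the difficulty lies.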

But there are defenses. Agentic AI can serve as a monitoring tool, watching multiple parameters for abnormal activity. It can even be trained to recognize its own kind, flagging when responses during a verification are likely coming from a computer rather than a person. Penetration testing, which checks for ways someone could access a network, is another effective defense: organizations could use agentic AI to try to defeat their own systems, much like a red team exercise. By layering these defenses, organizations can mitigate the risks associated with agentic AI and unlock its full potential.
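The monitoring defense can be sketched with a simple statistical baseline: watch each parameter's history and flag readings that deviate sharply from it. The metric name and thresholds here are illustrative assumptions, not a production detector.

```python
import statistics

class AnomalyMonitor:
    """Minimal sketch: flag readings far outside a metric's running baseline."""

    def __init__(self, threshold: float = 3.0):
        self.history: dict[str, list[float]] = {}
        self.threshold = threshold  # flag values > threshold std devs from the mean

    def observe(self, metric: str, value: float) -> bool:
        """Record a reading; return True if it looks abnormal."""
        past = self.history.setdefault(metric, [])
        abnormal = False
        if len(past) >= 10:  # need a baseline before judging
            mean = statistics.fmean(past)
            stdev = statistics.pstdev(past) or 1e-9  # avoid dividing by zero
            abnormal = abs(value - mean) / stdev > self.threshold
        past.append(value)
        return abnormal

monitor = AnomalyMonitor()
for _ in range(20):
    monitor.observe("response_time_ms", 120.0)   # human-speed baseline
print(monitor.observe("response_time_ms", 5.0))  # machine-fast reply -> True
```

A suspiciously fast, perfectly consistent response time is one plausible signal that a verification is being driven by a machine; a real system would combine many such parameters.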

The convergence of use case, compliance, and fear of the unknown is a significant challenge. If agentic AI is told to onboard a customer or a business, can it do so in a way that meets compliance requirements? Business verification might sound like an ideal use case for the technology. Business sizes vary widely, and it's difficult to verify across that spectrum. Beyond that, those businesses have ultimate beneficial owners who require identity document verification. Agentic AI could manage those separate steps and logic chains, taking specific actions depending on the size of a business. Digital verification, though, operates under a strict set of rules. The agentic AI could onboard a business, but it might be hard-pressed to do so compliantly, because the process is not the same every time.
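The size-dependent logic chains described above might look something like the following sketch. The step names, the employee-count cutoff, and the function itself are hypothetical; they only illustrate how the onboarding path branches by business size and fans out per beneficial owner.

```python
def onboarding_steps(employee_count: int, ubos: list[str]) -> list[str]:
    """Return the (illustrative) verification steps for one business."""
    steps = ["registry_lookup"]  # every business gets a registry check
    if employee_count > 250:
        steps.append("financial_statement_review")  # larger firms: deeper checks
    elif employee_count <= 1:
        steps.append("sole_trader_checks")
    else:
        steps.append("smb_checks")
    # Each ultimate beneficial owner needs identity document verification.
    steps += [f"verify_ubo:{name}" for name in ubos]
    return steps

print(onboarding_steps(5, ["Alice"]))
# ['registry_lookup', 'smb_checks', 'verify_ubo:Alice']
```

The compliance difficulty is visible even in this toy version: two businesses produce two different step lists, so proving that every path satisfies the same strict rules is harder than auditing one fixed procedure.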

Regulators are going to have to decide whether to allow agentic AI in digital verification. In the industry, that may prove a greater constraint than having the technology to do the task. There is also the problem of explaining what the system did in a given circumstance and getting people comfortable with it. This lack of transparency and explainability is a significant challenge. As agentic AI becomes more prevalent, it's essential to develop methods for explaining its decision-making process; doing so will help build trust in the technology and alleviate concerns around its use.

A practical approach to new technology is essential. Agentic AI will need guardrails and human oversight in the beginning; at least in its early days, the technology will be a programmed system. It can run in parallel with a person executing the same task to see whether both arrive at the same conclusion, even if they follow different decision branches to get there. Oversight and testing can diminish concerns around agentic AI. While it's prudent to proceed with caution, autonomous technology can be turned into a tremendous tool with the right decisions. By acknowledging the challenges and limitations of agentic AI, we can work towards a framework for its safe and effective use.
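The parallel-run idea can be sketched as a small shadow-run harness that compares the agent's conclusion with a human reviewer's on the same case. The decision functions here are hypothetical stand-ins; the point is that only the final outcomes are compared, since the two may legitimately take different decision branches.

```python
from typing import Callable

def shadow_run(case: dict,
               agent_decide: Callable[[dict], str],
               human_decide: Callable[[dict], str]) -> dict:
    """Run the agent and a human reviewer on the same case; compare outcomes."""
    agent_result = agent_decide(case)
    human_result = human_decide(case)
    return {
        "agent": agent_result,
        "human": human_result,
        "agree": agent_result == human_result,  # only conclusions must match
    }

result = shadow_run(
    {"doc_number": "X123"},
    agent_decide=lambda c: "verified",   # placeholder agent
    human_decide=lambda c: "verified",   # placeholder human reviewer
)
print(result["agree"])  # True
```

Logging disagreements from runs like this gives an oversight team concrete cases to audit before the agent is trusted to act alone.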

As we move forward, it’s essential to consider the broader implications of agentic AI. How will it impact the workforce? Will it augment human capabilities or replace them? What are the ethical considerations surrounding its use? These are just a few of the questions that need to be addressed. By engaging in open and honest discussions about the potential risks and benefits of agentic AI, we can work towards creating a future where autonomous technology enhances human life without compromising our values or well-being.
