The original post: /r/cybersecurity by /u/Nearby_Maybe_2110 on 2024-10-09 01:38:41.
Hey everyone,
With AI agents popping up more in companies—especially across different teams and departments—I’ve been thinking about how we handle their security. These agents, built on large language models and hooked into various tools, have access to tons of data and can automate tasks like never before. But that also means they interact with way more systems than a regular employee might.
So, how do we keep them secure at every point?
Having worked in network and cyber security, I feel like we need to adapt our usual security measures to these AI agents. Things like issuing the agents their own credentials and authorizing them per resource, logging everything they do, maybe even requiring step-up authentication when they access sensitive datasets. Since their behavior can vary a lot from task to task, context-driven security could help too.
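To make the idea concrete, here's a minimal sketch of what per-agent credentials plus scoped authorization and audit logging might look like. This is purely illustrative, not any real product's API: the agent names, scope strings, and the shared HMAC signing key are all made up for the demo, and a real deployment would use a proper token standard (e.g. OAuth2/JWT) and a central policy engine instead.

```python
import time
import hmac
import hashlib
import json

# Assumption for the sketch: a single shared HMAC key stands in for a
# real token-issuing service. Do not do this in production.
SECRET = b"demo-signing-key"


def issue_token(agent_id, scopes, ttl=300):
    """Issue a short-lived, signed credential naming the agent and its scopes."""
    payload = {"agent": agent_id, "scopes": scopes, "exp": time.time() + ttl}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body, sig


def authorize(body, sig, required_scope, audit_log):
    """Check signature, expiry, and scope; record every decision in the log."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        audit_log.append(("deny", "bad-signature"))
        return False
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        audit_log.append(("deny", "expired", claims["agent"]))
        return False
    if required_scope not in claims["scopes"]:
        audit_log.append(("deny", "missing-scope", claims["agent"], required_scope))
        return False
    audit_log.append(("allow", claims["agent"], required_scope))
    return True


# Demo: a hypothetical reporting agent granted read access to one dataset.
log = []
body, sig = issue_token("report-bot", ["read:sales"])
authorize(body, sig, "read:sales", log)  # allowed: scope was granted
authorize(body, sig, "write:hr", log)    # denied: scope was never granted
```

The point is that the agent never gets a blanket employee-style login; it gets a narrow, expiring credential per task, and every allow/deny decision lands in an audit trail you can review later.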
The goal is to use our existing security setups but apply them in new ways to these agents as they become more common and start interacting outside the company too.
What do you all think? How should we be securing AI agents in our workplaces?