- Yaro on AI and Tech Trends
AI Agents Become New Security Weak Link
Plus: US Government Approves AI Companies for Federal Contracts
Get in Front of 50k Tech Leaders: Grow With Us
Midweek, Fellows, but that's no reason to get lazy and delegate all your work to AI agents; they expose more of your sensitive data than you may think, and they don't truly think. OpenAI makes a U-turn and releases an open model you can run locally, while the government makes it even easier for three tech behemoths to land AI federal contracts. Let's dive in and, as always, stay curious.
OpenAI Releases First Open-Weight Models in 5 Years
US Government Approves AI Companies for Federal Contracts
AI Tools
AI Agents Become New Security Weak Link
Learning Corner - Personal Prompt
Training Generative AI? It starts with the right data.
Your AI is only as good as the data you feed it. If you're building or fine-tuning generative models, Shutterstock offers enterprise-grade training data across images, video, 3D, audio, and templates—all rights-cleared and enriched with 20+ years of human-reviewed metadata.
With 600M+ assets and scalable licensing, our datasets help leading AI teams accelerate development, simplify procurement, and boost model performance—safely and efficiently.
Book a 30-minute discovery call to explore how our multimodal catalog supports smarter model training. Qualified decision-makers will receive a $100 Amazon gift card.
For complete terms and conditions, see the offer page.
📰 AI News and Trends
Walmart Creates AI 'Super Agents' to Manage Its Growing Fleet of Worker Bots
OpenAI in Talks for Share Sale at $500 Billion Valuation
Perplexity raised another $200M at a $20B valuation
Google DeepMind is releasing a new version of its AI “world” model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time.
ElevenLabs launches Eleven Music, an AI tool that generates full songs from text prompts with commercial licensing.
🌐 Other Tech news
Aurora begins nighttime driverless operations. The company said nighttime driving, which it started in July, "doubles truck utilization potential."
China tests out stablecoins amid fears of capital outflows
Sales of Ozempic have dropped; rival Mounjaro appears to be more effective
OpenAI Releases First Open-Weight Models in 5 Years
OpenAI has released two open-weight language models, gpt-oss-120b and gpt-oss-20b, marking its first such release since GPT-2 in 2019. These models can run locally on consumer devices and be fine-tuned for specific purposes, representing a strategic shift for the company.

Key Features:
Both models use chain-of-thought reasoning (similar to OpenAI's o1 model)
Can browse the web, execute code, and function as AI agents
The smaller 20b model runs on devices with 16GB+ memory
Available free on Hugging Face under Apache 2.0 license (allowing commercial use)
Performance & Competition:
OpenAI says gpt-oss-120b performs similarly to its proprietary o3 and o4-mini models
The release appears to be a response to DeepSeek's recent open-weight model that stunned Silicon Valley
Positions OpenAI to compete with Meta's Llama series in the open-weight space
CEO Sam Altman emphasized wanting open AI innovation to happen in the US "based on democratic values." The company sees these open models as complementary to its paid services rather than competition, offering advantages like offline operation and firewall compatibility. The release was delayed for additional safety testing, including evaluations of potential misuse by bad actors.
🧠 Learning Corner
Try this prompt on the AI models you use most, and let us know what you think of the feedback:
"Based on all our conversations, tell me what you understand about who I am, what I do, how I like to communicate, and what I typically need from you. Be specific."
US Government Approves AI Companies for Federal Contracts

The General Services Administration (GSA) is adding OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude to its approved vendor list, enabling widespread AI adoption across civilian federal agencies.
Federal agencies can now quickly procure these AI tools through the GSA's Multiple Award Schedule without months of individual negotiations. The GSA used its buying power to secure deep discounts, similar to existing deals with Adobe and Salesforce.
AI will penetrate every inch of our digital world as agencies plan to use AI for processing patent applications, detecting tax fraud, reviewing grant submissions, developing customer service chatbots, and summarizing public comments on regulations. The Treasury Department and Office of Personnel Management have already expressed interest.
The approval comes days after President Trump signed executive orders mandating federal agencies only procure language models "free from ideological bias." GSA officials say enforcing the ban on "woke AI" will be handled agency-by-agency, while emphasizing the US needs to win the AI race.
GSA says more AI vendors will be considered; these three were simply furthest along in procurement.
🧰 AI Tools
Download our list of 1000+ Tools for free.
AI Agents Become New Security Weak Link
New studies reveal major AI vulnerabilities: LLMs provide incorrect login URLs one-third of the time, while AI agents operate with full user privileges but zero security awareness.

AI agents (used by 79% of organizations) have access to all enterprise apps and passwords, but can't recognize security threats
When asked for sign-in URLs, LLMs return wrong sites 34% of the time, including fake, parked, or unrelated domains
Smaller brands are hit harder as LLMs are more likely to "hallucinate" their login pages
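One practical mitigation for hallucinated sign-in URLs is to never let an agent navigate to a login page the model generated itself: check every candidate URL against a hand-maintained allowlist first. A minimal sketch in Python (the allowlist entries and function name are our own illustration, not from the studies):

```python
from urllib.parse import urlparse

# Hand-maintained mapping of brands to their real login hosts (illustrative entries).
LOGIN_ALLOWLIST = {
    "github": {"github.com"},
    "google": {"accounts.google.com"},
}

def is_trusted_login_url(brand: str, url: str) -> bool:
    """Return True only if the URL's host is on the brand's allowlist."""
    host = urlparse(url).hostname or ""
    allowed = LOGIN_ALLOWLIST.get(brand.lower(), set())
    # Accept exact host matches or subdomains of an allowed host.
    return any(host == h or host.endswith("." + h) for h in allowed)

print(is_trusted_login_url("github", "https://github.com/login"))      # True
print(is_trusted_login_url("github", "https://github-login.example"))  # False
```

Anything the model "remembers" that isn't on the list simply gets refused, which turns a 34% hallucination rate into a non-event.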
AI agents have the same system privileges as human users, including access to all logins and passwords, but lack the security awareness that comes from employee training. They blindly complete tasks without recognizing threats. Worse, storing these credentials on agent servers means your sensitive data is exposed to those platforms and any third parties with cloud access, multiplying the attack surface.
We suggest implementing browser-native detection for malicious activity and restricting agent permissions immediately. Long-term, develop systems that differentiate between human and AI users in real-time. Most critically: never delegate personal or highly sensitive access to AI agents. They aren't human, won't exercise judgment, and can't be trained to recognize when they're being exploited.
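One way to restrict agent permissions in practice is to put a deny-by-default gate between the agent and the credential store, so each agent identity can read only the secrets it was explicitly granted. A minimal sketch (the class and method names are illustrative, not any specific vendor's API):

```python
class ScopedSecretStore:
    """Deny-by-default credential access, scoped per agent identity."""

    def __init__(self, secrets: dict[str, str]):
        self._secrets = secrets
        self._grants: dict[str, set[str]] = {}  # agent id -> allowed secret names

    def grant(self, agent_id: str, secret_name: str) -> None:
        self._grants.setdefault(agent_id, set()).add(secret_name)

    def read(self, agent_id: str, secret_name: str) -> str:
        # Anything not explicitly granted is refused.
        if secret_name not in self._grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not read {secret_name}")
        return self._secrets[secret_name]

store = ScopedSecretStore({"crm_token": "t-123", "payroll_db": "p-456"})
store.grant("support-agent", "crm_token")
print(store.read("support-agent", "crm_token"))  # prints "t-123"
# store.read("support-agent", "payroll_db") would raise PermissionError
```

The point is the inversion: instead of the agent inheriting everything its human user can touch, it starts with nothing and each credential must be granted deliberately.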