🤷‍♂️ Nobody Knows How AI Works.

But Anthropic may have an idea. Plus: OpenAI's nightmares, and how criminals leverage AI to enhance illegal activities.

Get in Front of 50k Tech Leaders: Grow With Us

Greetings Team, this is what you need to know today…

  • 📰 News and Trends.

  • Nobody Knows How AI Works, but “Anthropic” May Have Hints.

  • Does the Scarlett Johansson Incident Make OpenAI Look Desperate, or Was It a Cunning Move for Publicity?

  • 🧰 AI Tools (Text to Speech)

  • How do Criminals leverage AI to enhance illegal activities?

  • Is The AI ‘Safety Movement’ Dead? (Bloomberg)

  • The RealReal, a marketplace for luxury goods, is using AI to find fakes (BI)

  • Top Chinese chipmakers SMIC and CXMT are aggressively localizing their supply chains to counter U.S. export controls.

  • Wearable AI Startup Humane Explores Potential Sale (Bloomberg)

  • The European Union’s AI Act, the first of its kind, will set the global standard for safeguards on artificial intelligence (FC)

  • The Flybridge AI Index is here.

🌐 Other Tech news

  • US says cyberattacks against water supplies are rising, and utilities need to do more to stop them (AP)

  • TikTok plans global layoffs in operations and marketing (CNN); Pixar also lays off 14% of its workforce (Variety)

  • Comcast has set a $15 price tag for a streaming bundle that offers Netflix, Peacock, and Apple TV+ (TheWrap)

  • Meta, Match, Coinbase, and others team up to fight online fraud and crypto scams (TC)

Nobody Knows How AI Works, but “Anthropic” May Have Hints.

Researchers at the AI company Anthropic have made progress in understanding the inner workings of large language models, potentially helping to prevent misuse and mitigate threats. By using "dictionary learning," they identified approximately 10 million patterns, or "features," within their model, Claude 3 Sonnet. These features correlate with specific topics and concepts, such as cities, scientific terms, and abstract ideas like deception. Manipulating these features can change the AI's behavior, offering a way to control biases and safety risks. While this represents significant progress, fully understanding and controlling large AI models remains a complex and costly challenge.
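The "dictionary learning" mentioned above decomposes a model's internal activations into a sparse combination of recurring directions ("features"). The sketch below illustrates the core idea on toy data, assuming only that activations are sparse mixtures of hidden directions; the data, dimensions, and parameters are illustrative and far smaller than what Anthropic actually used.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy "activations": 200 samples of 16-dimensional vectors, each built as a
# sparse mix of a few hidden directions plus noise. This is a hypothetical
# stand-in for real model activations.
rng = np.random.default_rng(0)
hidden_directions = rng.normal(size=(8, 16))            # 8 hidden "concepts"
codes = rng.random((200, 8)) * (rng.random((200, 8)) < 0.2)
activations = codes @ hidden_directions + 0.01 * rng.normal(size=(200, 16))

# Dictionary learning: recover a set of directions ("features") such that
# each activation is a sparse combination of them.
dl = DictionaryLearning(
    n_components=8,
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,
    random_state=0,
)
sparse_codes = dl.fit_transform(activations)

print("codes shape:", sparse_codes.shape)
print("fraction nonzero:", np.mean(sparse_codes != 0))
```

In the real setting each recovered feature is then inspected for what text it fires on (cities, deception, and so on), and "manipulating" a feature amounts to scaling its contribution before reconstructing the activation.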

Does the Scarlett Johansson Incident Make OpenAI Look Desperate, or Was It a Cunning Move for Publicity?

Last week, OpenAI released GPT-4o, an AI "omnimodel," expecting a significant milestone. However, the company soon faced backlash for allegedly using Scarlett Johansson's voice without permission. This incident underscores criticisms of OpenAI's aggressive approach and follows the resignation of key "superalignment" team members, raising questions about its direction and integrity. Additionally, GPT-4o's tokenizer data, polluted by Chinese spam websites, has led to issues with phrases related to pornography and gambling, potentially worsening AI model problems like hallucinations and misuse.

Despite these issues, the controversy may have been a strategic move for publicity. The resulting attention has boosted sales and kept OpenAI in the spotlight, leveraging controversy to maintain a competitive edge as Google and Apple close the AI technology gap.

📰 Publications I am currently reading and recommending:

  • Level Up Creators - Each week we publish tips, shortcuts, tear-downs, knowledge bombs, and easy buttons to help creators, solopreneurs, and coaches achieve the holy grail of business.

  • Smart Solopreneur - A newsletter for experienced online solopreneurs, like strategists, copywriters, freelancers, and consultants.

  • Growth Forum - Learn how to build a repeatable sales process creating a pipeline of qualified deals.

🧰 AI Tools

Text to Speech

Download 500+ free tools here. - To be featured, fill out This Form.

How do Criminals leverage AI to enhance illegal activities?

Key methods include:

1. Phishing: AI generates sophisticated, convincing phishing emails that bypass language barriers, driving a spike in phishing scams.

2. Deepfake Audio and Video: AI creates realistic fake voices and videos to scam victims, exemplified by a $25 million fraud in Hong Kong.

3. Bypassing Identity Checks: Criminals use deepfakes to trick verification systems, selling these services for as low as $70.

4. Jailbreak-as-a-Service: Services like EscapeGPT allow criminals to manipulate AI models to produce harmful content.

5. Doxxing and Surveillance: AI analyzes personal data to reveal sensitive information, aiding in doxxing and surveillance.

AI can be a powerful tool for positive productivity but also enhances illegal activities. We must be vigilant with all emails, calls, and queries from banks and other institutions as sophisticated scams are increasingly common. Establish code words with your family and business partners to verify the legitimacy of emails or calls. If the other party can correctly provide the code word, you can be more confident it's genuine; otherwise, assume it's a scam.
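The code-word advice above is essentially a shared-secret check. For machine-to-machine channels, the same idea can be done without ever transmitting the secret itself, e.g. via an HMAC challenge-response. This is a toy sketch, not a production authentication scheme; the secret value and function names are illustrative.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"pre-agreed secret"  # illustrative; exchange it in person

def make_challenge() -> bytes:
    # The verifier sends a fresh random nonce so old responses can't be replayed.
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    # The prover returns HMAC(secret, challenge); the secret never crosses the wire.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))            # genuine party passes
print(verify(challenge, respond(challenge, b"wrong")))  # impostor fails
```

A deepfaked voice can repeat anything it has heard, but it cannot answer a fresh challenge without the secret, which is exactly why a pre-agreed code word works for phone calls too.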

Newsletter Recommendation:

Growth Forum - Learn how to build a repeatable sales process creating a pipeline full of qualified deals.

Secrets of Success - Learn Mental Models for success.
