🦾The Next Era is Physical AI.

Plus: Google vs. Microsoft vs. Meta

In partnership with

Get in Front of 50k Tech Leaders: Grow With Us

The next AI era is here, and it requires living in the real world. AI needs us to wear it, move with it, and even drive it. Why? Because the next breakthrough isn’t more text, it’s massive amounts of real-world video and sensor data. Tech giants are now in a high-stakes race for both profitability and dominance, competing to capture physical data at scale through wearables, vehicles, and cameras everywhere. In this edition, we also share Physical AI tools and learning resources to help you understand and build the next phase of AI. Stay curious.

  • The Next Era is Physical AI.

  • 🧰 AI Tools - Physical AI

  • Meta vs. Microsoft vs. Google

  • 📚Learning Corner - MIT Robotics & AI

Subscribe today and get 60% off for a year, free access to our 1,500+ AI tools database, and a complimentary 30-minute personalized consulting session to help you supercharge your AI strategy. Act now as it expires in 3 days…

  • Nvidia Corp., Microsoft Corp., and Amazon.com Inc. are in discussions to invest as much as $60 billion in OpenAI, with SoftBank contributing another $30 billion, as part of a major new funding round.

  • Would you let Chrome take the wheel? Google begins rolling out Chrome’s “Auto Browse” AI agent today.

  • Meta users will start to see new AI models and products from the company in a matter of months, including AI-powered shopping assistants.

  • It’s hard to imagine a world in several years where most glasses that people wear aren’t AI glasses. Meta on the future of wearables.

  • Google is testing voice cloning and GitHub repository import for AI Studio, hinting at native audio upgrades and new tools aimed at developers.

Other Tech News

  • Snap gets serious about Specs, spins AR glasses into standalone company.

  • The rise of weather influencers. Because who watches TV anymore… They are fun, informative, and some influencers are actually meteorologists.

  • KPop Demon Hunters is officially the most-streamed movie of 2025 with 20.5 billion minutes watched.

What Will Your Retirement Look Like?

Planning for retirement raises many questions. Have you considered how much it will cost, and how you’ll generate the income you’ll need to pay for it? For many, these questions can feel overwhelming, but answering them is a crucial step forward for a comfortable future.

Start by understanding your goals, estimating your expenses and identifying potential income streams. The Definitive Guide to Retirement Income can help you navigate these essential questions. If you have $1,000,000 or more saved for retirement, download your free guide today to learn how to build a clear and effective retirement income plan. Discover ways to align your portfolio with your long-term goals, so you can reach the future you deserve.

The Next Era is Physical AI.

As large language models (LLMs) hit a training plateau, having nearly swallowed all of the text that currently exists in the world, inference becomes the priority for these models and the search for new training data shifts to the physical world.

What are world models? They are neural networks that understand the dynamics of the real world, including physics and spatial properties. They take inputs such as text, images, video, and motion data and generate videos that simulate realistic physical environments. Physical AI developers use world models to generate custom synthetic data or to train downstream AI models for robots. Put simply, Physical AI is the system that bridges the digital and physical worlds, allowing machines to perceive, reason, and interact with their surroundings in real time.

Humans and other animals interact with their surroundings largely without conscious thought. We walk through spaces without hitting immovable objects, put our clothes on, drive, navigate the world using our senses, and even rearrange our own spaces to make them easier to move through. Today's LLMs, by contrast, only navigate the text, images, and videos they receive as input and produce outputs accordingly. World models are trained to give machines "spatial intelligence": an internal understanding of physics, cause and effect, and 3D space. To build that understanding, they ingest millions of hours of real-world video and learn motion and dynamics. By predicting subsequent events, a world model can generate simulations, enabling robots to practice tasks virtually before attempting them physically. These learned capabilities are then fine-tuned for specific hardware configurations, such as autonomous vehicles or robotic appendages.
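
To make the "predict the next moment" idea concrete, here is a minimal, purely illustrative sketch in PyTorch of the kind of architecture described above: an encoder compresses an observation into a latent state, a dynamics network predicts the next latent state given an action, and a decoder reconstructs the predicted observation so the model can "imagine" rollouts before acting. The class name, layer sizes, and dimensions are invented for illustration; the production systems from NVIDIA, DeepMind, or Meta are vastly larger and trained on massive video corpora.

```python
# A minimal, illustrative world-model sketch (not any company's actual system):
# encoder -> latent dynamics -> decoder, so an agent can "imagine" futures.
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim=64, action_dim=4, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))

    def forward(self, obs, action):
        z = self.encoder(obs)                                    # compress what the machine "sees"
        z_next = self.dynamics(torch.cat([z, action], dim=-1))   # predict what happens next in latent space
        return self.decoder(z_next)                              # reconstruct the predicted next observation

    @torch.no_grad()
    def imagine(self, obs, actions):
        # Roll out several predicted steps without touching the real world.
        z = self.encoder(obs)
        frames = []
        for a in actions:
            z = self.dynamics(torch.cat([z, a], dim=-1))
            frames.append(self.decoder(z))
        return torch.stack(frames)

# Training would minimize the gap between predicted and actual next observations
# (e.g. a mean-squared error over huge amounts of video and sensor logs).
```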

Remember the saying “data is the new oil”? Well, companies with the most video data (YouTube, Meta, Tesla, and perhaps the ESPNs of the sports world) now have an upper hand in this new paradigm. And this is just the beginning: as the battle for wearables intensifies, the data these devices generate becomes more valuable, because Meta glasses worn by millions mean countless hours of real-world footage for training spatial models. These wearables may become ubiquitous and relatively cheap because, much as with social media, we become the product, supplying the training data (video) to tech companies as we wear the devices and drive cars fitted with multiple cameras.

Major tech companies like NVIDIA, Google DeepMind, and Meta are developing world models to overcome current AI limitations, such as the lack of an intuitive understanding of cause and effect and 3D space. Specialized startups like World Labs and AMI Labs are also working on this "spatial intelligence" to enable robots and autonomous systems to predict physical outcomes before acting, with applications in the automotive, manufacturing, and entertainment industries. Startups and established companies alike are rushing to release wearables to get ahead of the next era: Snap just spun its wearables division into its own company, Google's glasses are making a comeback, Meta's Ray-Ban devices are already familiar, and OpenAI has been working on its own AI device with Jony Ive.

This is just the beginning. In the next edition, I’ll break down how spatial computing, world models, and Physical AI will shape decision-making: machines won’t just answer questions, they will tell us what to do next.

📚Learning Corner

Courses & Educational Content

  • MIT – Robotics & AI
    Strong academic grounding in perception, control, and physical systems.

Meta vs. Microsoft vs. Google

Microsoft spends big, but Meta and Google are emerging as the winners, so far.

  • Microsoft posted $81.3B in quarterly revenue (+17% YoY) and $38.5B in profit (+60% YoY), while pouring $37.5B into capex in a single quarter (+65% YoY), largely for AI data centers.

  • Azure grew 39% YoY, confirming strong AI demand, but capacity constraints are expected to last through 2026, pressuring margins. Shares fell >5% after hours despite the beat.

Why Meta looks like the real winner

  • Meta just revealed strong profitability with far lower AI infrastructure intensity than Microsoft.

  • Meta’s AI strategy leans heavily on open-source (Llama) and internal efficiency, driving AI-powered gains in ads without matching hyperscaler-level capex.

  • While Microsoft commits tens of billions per quarter and absorbs near-term margin risk, Meta is monetizing AI now, with operating leverage intact.

Microsoft is spending aggressively on AI infrastructure, but most AI profits today still come from enterprise products like Azure AI and Copilot, not direct consumer monetization. AI-driven capex is rising faster than near-term margins.

Meta, meanwhile, has already cracked AI monetization. AI-driven ranking and recommendation systems across Instagram, Facebook, and WhatsApp have increased engagement and ad efficiency, directly lifting ad impressions and revenue. Ads account for ~98% of Meta’s revenue, and AI is now core to that engine.

Google is embedding AI directly into Search, Chrome (AI Mode), Workspace (Gemini), and Cloud. These upgrades improve retention, ad performance, and cloud growth, keeping AI tightly linked to revenue instead of standalone products.

Meta is monetizing AI immediately, Google is reinforcing its ads-plus-cloud flywheel, and Microsoft is still working to convert its massive AI investment into consumer-level profit.

Your Payment Processor Is Hiding Your Revenue Leaks

Most businesses think churn = cancellations. That’s wrong.

What your payment processor doesn’t clearly show you:

  • Failed payments silently kill 5–9% of revenue

  • Customers at risk before they cancel

  • Which products actually drive churn

  • How much revenue could be recovered every month

We built a real-time financial command center that sits on top of Stripe (or any payment processor) and shows:

  • MRR, Net Revenue, Churned MRR

  • Revenue by product & customer

  • Subscription risk alerts

  • Failed payment recovery tracking

  • Client lifetime value & tenure

If you do $20k+/month, you’re almost certainly leaving money on the table.

🧰 AI Tools of the Day

Physical AI

  • DreamerV3 (DeepMind) – A state-of-the-art world model for latent-space planning and learning, known for sample-efficient reinforcement learning. Good for: simulated physics, control tasks, and open-ended environments.

  • MuZero (DeepMind) – Learns dynamics and value models without knowing the environment’s rules up front. Good for: game environments, planning, and hybrid model-based approaches.

  • Isaac Gym (NVIDIA) – High-performance physics engine plus world representation for robotics and control simulations. Good for: large-scale GPU training and multi-agent environments.

  • Unity ML-Agents (Unity) – Environment and world-model training toolkit with built-in reinforcement learning. Good for: game-like simulations, visual world modeling, and embodied agents.

  • PETS (Probabilistic Ensembles with Trajectory Sampling) – Classic world-model technique using probabilistic dynamics with uncertainty estimates. Good for: model-based RL and planning under uncertainty. A toy sketch of the idea follows below.
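
To give a feel for how the PETS-style approach in the last item works, here is a hand-rolled toy in plain NumPy. All names, dimensions, and the linear "models" are invented for illustration: it keeps a small ensemble of stand-in dynamics models, samples random candidate action sequences, rolls each one through randomly chosen ensemble members (trajectory sampling), and executes the first action of the best-scoring sequence. Real PETS trains neural-network ensembles on collected data and typically plans with the cross-entropy method rather than pure random shooting.

```python
# Toy PETS-style planner: ensemble dynamics + trajectory sampling + pick-the-best.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, ENSEMBLE, HORIZON, CANDIDATES = 3, 1, 5, 10, 200

# Stand-in ensemble: slightly different linear dynamics models, mimicking the
# disagreement a trained probabilistic ensemble would express as uncertainty.
ensemble = [
    (np.eye(STATE_DIM) + 0.05 * rng.standard_normal((STATE_DIM, STATE_DIM)),
     0.1 * rng.standard_normal((STATE_DIM, ACTION_DIM)))
    for _ in range(ENSEMBLE)
]

def step(state, action, member):
    A, B = member
    return A @ state + B @ action            # predicted next state under this member

def reward(state):
    return -np.sum(state ** 2)               # toy objective: drive the state toward zero

def plan(state):
    # Random-shooting model-predictive control over the ensemble.
    best_return, best_first_action = -np.inf, np.zeros(ACTION_DIM)
    for _ in range(CANDIDATES):
        actions = rng.uniform(-1.0, 1.0, size=(HORIZON, ACTION_DIM))
        total, s = 0.0, state.copy()
        for a in actions:
            member = ensemble[rng.integers(ENSEMBLE)]   # sample a model per step
            s = step(s, a, member)
            total += reward(s)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

state = rng.standard_normal(STATE_DIM)
print("planned first action:", plan(state))
```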

Explore our AI Guides — From coding to photography and beyond, find step-by-step tips to put AI to work for you.
