
👋 Welcome! Not all power looks political. Some of it lives inside software. Today we unpack the global debate over who sets AI’s limits—and whether anyone really should.

AI & TECH

EV Radiation Fears Don’t Hold — Tests by ADAC and the Federal Office for Radiation Protection found electric cars emit electromagnetic fields well below safety limits during driving and charging. Spikes appear near footwells or heated seats, not bodies. Even fast charging proved safe, with petrol cars registering higher readings.

Missouri Age Checks Break Internet — Websites are asking people in Missouri to prove their age under a law meant for explicit content. Critics at the Electronic Frontier Foundation warn vague rules invite overblocking, privacy risks, and data leaks as companies either lock out users, hoard IDs, or abandon the state.

Users Prefer Fixing Over Generating — New stats from OpenAI show people mostly use ChatGPT for practical tasks, not sci-fi stunts: editing text, getting advice, and uploading photos beat image generation. Some 800 million users a week treat it like a digital utility, closer to Google than a gadget, handling life admin daily.

The AI Race Just Went Nuclear — Own the Rails.

Meta, Google, and Microsoft just reported record profits — and record AI infrastructure spending:

  • Meta boosted its AI budget to as much as $72 billion this year.

  • Google raised its estimate to $93 billion for 2025.

  • Microsoft is following suit, investing heavily in AI data centers and decision layers.

While Wall Street reacts, the message is clear: AI infrastructure is the next trillion-dollar frontier.

RAD Intel already builds that infrastructure — the AI decision layer powering marketing performance for Fortune 1000 brands. Backed by Adobe, Fidelity Ventures, and insiders from Google, Meta, and Amazon, the company has raised $50M+, grown valuation 4,900%, and doubled sales contracts in 2025 with seven-figure contracts secured.

This is a paid advertisement for RAD Intel made pursuant to Regulation A+ offering and involves risk, including the possible loss of principal. The valuation is set by the Company and there is currently no public market for the Company's Common Stock. Nasdaq ticker “RADI” has been reserved by RAD Intel and any potential listing is subject to future regulatory approval and market conditions. Investor references reflect factual individual or institutional participation and do not imply endorsement or sponsorship by the referenced companies. Please read the offering circular and related risks at invest.radintel.ai.

CAREER & GROWTH

Texas Targets Shein Over Safety — Texas Attorney General Ken Paxton has opened a probe into Shein, alleging unsafe products, misleading safety claims, and unethical labor practices, plus data-privacy issues. The move follows French action and EU scrutiny after illicit items surfaced. Shein says it’s cooperating but faces global pressure.

Data Centers Make Builders Rich — AI’s data-center gold rush is rewriting the trades. Builders report 30% pay bumps, six-figure salaries, and perks from bonuses to hot meals as hyperscalers like Amazon, Google, and Microsoft erect server cities. With a 439,000-worker shortage nationwide, supervisors suddenly wield leverage to match.

Strong People Ignore Uncontrollable Noise — Psychotherapist Amy Morin says resilient people stop obsessing over outcomes they can’t control and redirect focus to effort, boundaries, and skills. Her take: audit worries into circles of control, choose one action daily, and release the rest, freeing mental bandwidth for performance.

JOBS & OPPORTUNITIES

Today's opportunities are brought to you by RightSide.

____________________________________

UI & Product Development Intern | Internship | Parrots | Remote

Senior Systems Programmer | Full Time | League City, TX | In Person

Illustrator Intern | Internship | Better Letter Books | Remote

Cloud Full Stack Engineer | Full Time | Dallas, TX | In Person

Director of Special Projects/CoS | Full Time | New York, NY | In Person

Marketing Specialist | Part Time | Phoenix Eternal | Remote

WMS QA Lead | Full Time | QualiTest | Remote

____________________________________

Looking for more bandwidth for your business? Hire with RightSide to build your dream team here!

MONEY IN MOTION

Tariffs Choke US Factory Profits — U.S. manufacturing is shrinking as the Institute for Supply Management's PMI sinks to 48.2, signaling falling orders and margin pressure. Tariffs jack up input costs, force layoffs, and delay capex, while only AI-linked niches grow. Rising costs force the Fed to balance inflation against growth.

Leak Rocks UK Bond Markets — A premature forecast release by the Office for Budget Responsibility sent gilt yields swinging, overshadowing the Autumn Budget and forcing chair Richard Hughes to quit. Chancellor Rachel Reeves pledged safeguards and independence as the Treasury reviews controls.

Goldman Buys Innovator in $2B Deal — Goldman Sachs agreed to buy Innovator Capital Management for $2B, adding 159 defined-outcome ETFs and $28B in assets. CEO David Solomon says active ETFs are surging as the bank pivots from consumer banking toward asset and wealth management to boost recurring fee revenue.

BIG THINK

AI Needs Rules — But Whose? Why governments, companies, and you are in a quiet power struggle

In 2025, as AI systems become more pervasive — shaping what we read, how we work, how services are offered, and how decisions are made — the question of whether the state should step in to regulate them feels ever more urgent. On one side, AI promises massive benefits: more efficient public services, creative tools for individuals, and new opportunities for innovation. But without any guardrails, that power carries serious risks — from biased or opaque decisions, to misinformation, to threats to privacy or civil rights. For many, this means that state oversight isn’t just optional; it may be essential to ensure AI develops in a way that protects society broadly.

Recent months illustrate how contested this path is. In the U.S., a coalition of 35 state attorneys general — from both major parties — recently urged Congress not to block state-level AI laws. They argue that, in the absence of comprehensive federal legislation, states must retain the right to enact rules aimed at protecting residents from harms such as fraud, deepfakes or discriminatory algorithms. Meanwhile, proposals in Congress and from the administration have flirted with sweeping provisions to preempt state AI laws — prompting resistance and legal-political tug-of-wars. On the other side of the globe, the European Commission has proposed delaying strict “high-risk AI” regulations until 2027, part of a package that also relaxes some data-access and privacy constraints — a move critics say benefits large tech companies at the expense of user protections.

These developments show the trade-offs implicit in state control: regulation can help ensure accountability, discourage misuse, protect privacy and human rights, and give citizens some transparency over powerful systems. But there are real concerns that overly rigid or blanket regulation — especially if imposed without nuance — could stifle innovation, favor large incumbents over smaller or independent developers, and slow progress or creativity in AI. This is particularly salient when regulation lags behind technological change, or is applied uniformly without distinguishing between low-risk and high-risk uses.

So perhaps the wiser path lies in smart, adaptive governance — a framework that doesn’t treat all AI as equal, but rather calibrates oversight based on risk, context, and potential harm. For example: requiring transparency for systems used in public services or high-stakes decision-making; mandating disclosure when AI is used; giving individuals rights to challenge algorithmic outcomes; and allowing lighter-touch oversight for lower-risk creative or supportive tools. Such an approach can maintain innovation while safeguarding rights and trust.

For Gen Z and millennials — generations raised on the promises and pitfalls of the internet — the stakes are high. AI will shape how you work, create, communicate, and consume. The question isn’t simply whether the state should control AI — but how. As citizens, users, creators, and voters, you have the opportunity (and perhaps the duty) to push for governance that is thoughtful, fair, and forward-looking. The future of AI should not just be “fast” or “smart” — it should be just, inclusive, and under democratic control.

Actionable Insights

  1. Track the rules that shape your tools—follow at least one reliable source on AI regulation so you understand how laws affect the apps you rely on.

  2. Build literacy, not just usage—treat AI like financial or media literacy: something you actively understand, not passively consume.

The New Enterprise Approach to Voice AI Deployment

A practical, repeatable lifecycle for designing, testing, and scaling Voice AI. Learn how BELL helps teams deploy faster, improve call outcomes, and maintain reliability across complex operations.

PROMOTE YOUR BRAND TO EVERY PATH READERS

Reach a highly targeted audience of tens of thousands of young professionals who turn to Every Path for trusted insights on AI, tech, careers, and finance.

Partner with us to put your message in front of entrepreneurs, founders, developers, investors — Gen Z and millennial decision-makers looking for tools to thrive in today’s changing world of work. Ready to grow your brand with us?

THE NUMBER

0.1 microsievert is the radiation you get from one banana, making your snack technically more radioactive than your phone. But you’d need to eat millions for it to matter!
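For scale, a rough back-of-envelope check of that "millions" claim, assuming the commonly cited banana-equivalent dose of about 0.1 microsievert per banana and taking roughly 1 sievert as the level where acute radiation effects begin:

$10{,}000{,}000 \times 0.1\ \mu\text{Sv} = 1{,}000{,}000\ \mu\text{Sv} = 1\ \text{Sv}$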

WISDOM

“Enjoyment empowers effort. Doing is the fruit of delighting. Performance is energized by pleasure.”


Like and hit reply if you have any feedback. We’d love to hear from you!
