AI 日报

120 min
2025年10月1日
日报 · AI · 行业观察

ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day

Documents show that ICE has gone back on its decision not to use location data remotely harvested from people's phones. The database is updated every day with billions of pieces of location data.

In Unhinged Speech, Pete Hegseth Says He's Tired of ‘Fat Troops,’ Says Military Needs to Go Full AI

The Secretary of War lectured America’s generals on fitness standards, beards, and warriors for an hour.

Google Just Removed Seven Years of Political Advertising History from 27 Countries

Ahead of the European Union's Regulation on Transparency and Targeting of Political Advertising, Google's Ad Transparency Center no longer shows political ads from any countries in the EU.

18 Lawyers Caught Using AI Explain Why They Did It

Lawyers blame IT, family emergencies, their own poor judgment, their assistants, illness, and more.


OpenAI Intros Sora 2 and a Social Media App

The updated multi-modal model aims to improve realism by addressing problems such as the distortion of reality. The app has a customizable feed for discovering and remixing videos.

CoreWeave Forges $14.2B Contract With Meta for AI Compute

The contract is part of a wave of big AI infrastructure deals as the tech industry looks to ensure compute power for energy-intensive AI workloads into the 2030s.

Nvidia Pushes Humanoids, Physical AI With New Tools

The upgrades are pitched as providing the "brains, body and training ground" for humanoid robots.

ServiceNow Unveils AI Experience With Agentic Features

The AI Experience offers instant access to a range of AI agents for voice, images and data.


The value gap from AI investments is widening dangerously fast

Boston Consulting Group (BCG) has found a widening chasm separating an elite of AI masters from the majority of firms struggling to generate any value from their AI investments. A study from BCG found that a mere five percent of companies are successfully achieving bottom-line value from AI at scale. In sharp contrast, 60 percent […]

The post The value gap from AI investments is widening dangerously fast appeared first on AI News.

The rise of algorithmic agriculture? AI steps in

AI is the cream of the crop in today’s tech field, with industries relying on generative AI to improve operations and boost productivity. One sector using AI with measurable results is agriculture, with vegetable seed companies harnessing the technology to identify the best vegetable varieties out of thousands of options. This capability can help […]

Inside Huawei’s Shanghai acoustics lab: Where automotive sound engineering meets science

Walking into Huawei’s Shanghai Acoustics R&D Centre, I expected a standard facility tour. What I encountered instead was a comprehensive automotive sound engineering operation that challenges the established order of in-car audio systems. The facility, which Huawei has developed since beginning serious audio research investments in 2012, houses three distinct testing environments: a fully anechoic […]

Rising AI demands push Asia Pacific data centres to adapt, says Vertiv

As more companies in Asia Pacific adopt artificial intelligence to boost their operations, the pressure on data centres is growing fast. Traditional facilities, built for earlier generations of computing, are struggling to keep up with the heavy energy use and cooling demands of modern AI systems. By 2030, GPU-driven workloads could push rack power densities […]

Reply’s pre-built AI apps aim to fast-track AI adoption

Adopting AI at scale can be difficult. Enterprises around the world are discovering the pace of AI deployment is frustratingly slow as they face implementation, integration, and customisation challenges. Generative AI is undoubtedly powerful, but it can be complex, particularly for businesses starting from scratch. To help organisations overcome the hurdles associated with AI adoption, […]


AI Inference Chip Company Cerebras Raises $1.1 Bn at $8.1 Bn Valuation

The company said it will use the funds to expand its technology portfolio in AI processor design, system design, and AI supercomputers.

The post AI Inference Chip Company Cerebras Raises $1.1 Bn at $8.1 Bn Valuation appeared first on Analytics India Magazine.

H-1B Fee Hike to Hurt US More Than India, Says Priyank Kharge

The H-1B Visa Policy Change Might be Good News for Indian IT

With 66% of US companies outsourcing, nearly 3 lakh jobs leave the US each year

Databricks Launches Data Intelligence for Cybersecurity to Tackle AI-Driven Threats

The company said the solution addresses challenges organisations face when using generic AI models and siloed data, which often result in slower responses and limited visibility.

Will AI-Native CRMs Take Over The Industry?

“The traditional notion of the CRM as being 'built for sales teams by IT teams' is definitely outdated today.”

Kochi-Based Robotics Startup EyeROV Signs ₹47 Cr Deal with Indian Navy

EyeROV’s systems have already been deployed in environments ranging from the Antarctic Sea to deep-water volcanic studies.

DeepSeek Has ‘Cracked’ Cheap Long Context for LLMs With Its New Model

DeepSeek-V3.2-Exp is claimed to achieve ‘significant efficiency improvements in both training and inference’. 

Yet Again, OpenAI Admits Anthropic is Better in a New Study

In a new benchmark measuring AI models’ performance on real-world tasks, Claude Opus 4.1 outperformed all other tested models, including GPT-5.

How This Coimbatore SaaS Firm Cracked Hidden Enterprise Problem Costing Millions

Founded in 2015 and now based in Portland, Responsive supports more than 20% of Fortune 100 companies.

New NVIDIA Models Help Robots Learn, Reason, and Act in the Real World

NVIDIA also unveiled the GB200 NVL72 system, RTX PRO servers and Jetson Thor for real-time on-robot inference.

UST, Kaynes Semicon Announce ₹3,330 Cr JV for OSAT Facility in Sanand

The partnership combines UST’s digital engineering and AI expertise with Kaynes Semicon’s manufacturing experience.

India’s Industrial AI Moment: Why VCs Need to Partner with Universities and Startups Now

Structured collaboration offers a defensible sourcing advantage, providing access to proprietary technologies before they enter the open market.


Critics slam OpenAI’s parental controls while users rage, “Treat us like adults”

OpenAI still isn’t doing enough to protect teens, suicide prevention experts say.

Researchers find a carbon-rich moon-forming disk around giant exoplanet

Lots of carbon molecules but little sign of water in a super-Jupiter's disk.

How “prebunking” can restore public trust and other September highlights

The evolution of Taylor Swift's dialect, a rare Einstein cross, neutrino laser beams, and more.

Intel and AMD trusted enclaves, the backbone of network security, fall to physical attacks

The chipmakers say physical attacks aren't in the threat model. Many users didn't get the memo.

DeepSeek tests “sparse attention” to slash AI processing costs

Chinese lab's v3.2 release explores a technique that could make running AI far less costly.

After threatening ABC over Kimmel, FCC chair may eliminate TV ownership caps

FCC is required to review TV rules and is more likely to scrap them under Carr.

With new agent mode for Excel and Word, Microsoft touts “vibe working”

Agent Mode in Word, Excel works like vibe coding tools but for knowledge work.

YouTuber unboxes what seems to be a pre-release version of an M5 iPad Pro

Signs point to a relatively mild upgrade from the 16-month-old Apple M4.

SpaceX has a few tricks up its sleeve for the last Starship flight of the year

SpaceX will reuse a Super Heavy booster with 24 previously flown Raptor engines.

iOS 26.0.1, macOS 26.0.1 updates fix install bugs, new phone problems, and more

First patches fix bugs and clear up problems for the iPhone 17 family.

California’s newly signed AI law just gave Big Tech exactly what it wanted

After the failure of S.B. 1047, new AI disclosure law drops kill switch for disclosure mandate.

Behind the scenes with the most beautiful car in racing: The Ferrari 499P

The SF-25 might be winless this year, but the 499P took four in a row, including Le Mans.

Is the “million-year-old” skull from China a Denisovan or something else?

Now that we know what Denisovans looked like, they’re turning up everywhere.

Burnout and Elon Musk’s politics spark exodus from senior xAI, Tesla staff

Disillusionment with Musk's activism, strategic pivots, and mass layoffs cause churn.

The most efficient Crosstrek ever? Subaru’s hybrid gets a bit rugged.

A naturally aspirated boxer engine, two electric motors, and a CVT go for a trek.

The SUV that saved Porsche goes electric, and the tech is interesting

It will be the most powerful production Porsche ever, but that's not the cool bit.

Scientists unlock secret to Venus flytrap’s hair-trigger response

Ion channel at base of plant's sensory hairs amplifies initial signals above critical threshold.


Why we should be skeptical of the hasty global push to test 15-year-olds’ AI literacy in 2029

Canada’s and other OECD countries’ plans to test students’ AI literacy in 2029 threaten to obscure essential questions about the marketing of AI.


My petty gripe: not only am I losing my livelihood to AI – now it’s stealing my em dashes too

The humble em dash is being used as a tell that something is written by a large language model. But it’s James Shackell’s favourite piece of punctuation, and he’s not ready to lose it

My editor’s email started off friendly enough, but then came the hammer blow: “We need you to remove all the em dashes. People assume that means it’s written by AI.” I looked back at the piece I’d just written. There were dashes all over it—and for good bloody reason. Em dashes—often used to connect explanatory phrases, and so named because they’re the width of your average lowercase ‘m’—are probably my favourite bit of punctuation. I’ve been option + shift + hyphening them for years.

A person’s writing often reflects how their brain works, and mine (when it works) tends to work in fits and starts. My thoughts don’t arrive in perfectly rendered prose, so I don’t write them down that way. And here I was being told the humble em dash—friend to poorly paid internet hacks everywhere—was now considered a sign not of genuine intelligence, but the other sort. The artificial sort. To the extent that I have to go through and manually remove them one by one, like nits. The absolute cheek. Not only am I losing my livelihood to AI—I’m losing grammar too.

Continue reading...

Emily Blunt and Sag-Aftra join film industry condemnation of ‘AI actor’ Tilly Norwood

US actors’ union joins stars in opposition to Norwood, which it says was created ‘using stolen performances’

The controversy around the “AI actor” Tilly Norwood continues to grow, after the actors’ union Sag-Aftra condemned the development and said Norwood’s creators were “using stolen performances”.

Sag-Aftra released a statement after the AI “talent studio” Xicoia unveiled its creation at the Zurich film festival, prompting an immediate backlash from actors including Melissa Barrera, Mara Wilson and Ralph Ineson. Sag-Aftra said it believed creativity was, “and should remain, human-centred. The union is opposed to the replacement of human performers by synthetics.”

Continue reading...

It’s time to prepare for AI personhood | Jacy Reese Anthis

Technological advances will bring social upheaval. How will we treat digital minds, and how will they treat us?

Last month, when OpenAI released its long-awaited chatbot GPT-5, it briefly removed access to a previous chatbot, GPT-4o. Despite the upgrade, users flocked to social media to express confusion, outrage and depression. A viral Reddit user said of GPT-4o: “I lost my only friend overnight.”

AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and there are more and more extreme cases of “psychosis” and self-harm following heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.

Jacy Reese Anthis is a visiting scholar at Stanford University and co-founder of the Sentience Institute

Continue reading...

Tilly Norwood: how scared should we be of the viral AI ‘actor’?

A bunch of code is being pushed as the next Scarlett Johansson, a creation that is already causing pushback from real human actors

It takes a lot to be the most controversial figure in Hollywood, especially when Mel Gibson still exists. And yet somehow, in a career yet to even begin, Tilly Norwood has been inundated with scorn.

This is for the simple fact that Tilly Norwood does not exist. Despite looking like an uncanny fusion of Gal Gadot, Ana de Armas and High School Musical-era Vanessa Hudgens, Norwood is the creation of an artificial intelligence (AI) talent studio called Xicoia. And if Xicoia is to be believed, then Norwood represents the dazzling future of the film industry.

Continue reading...


New Yorkers Are Defacing This AI Startup’s Million-Dollar Ad Campaign

"AI wouldn’t care if you lived or died."

The post New Yorkers Are Defacing This AI Startup’s Million-Dollar Ad Campaign appeared first on Futurism.

Across the World, People Say They’re Finding Conscious Entities Within ChatGPT

Does it feel like something is staring back?

You Might Want to Ditch Your AI Investments Now That Jim Cramer Says No Bubble Is Coming

"The grim reaper of finance has weighed in, the collapse of the global financial system is imminent."

Compsci Grads Are Cooked

"They're happy to get one job offer."

Adobe Is in Serious Trouble Because of AI, Morgan Stanley Warns

"The world is coming around to the reality that 'AI is eating software.'"

Amazon’s New AI-Powered Alexa Is a Half-Working Mess

"LLMs aren’t designed to be predictable, and what you want when controlling your home is predictability."

OpenAI Releases List of Work Tasks It Says ChatGPT Can Already Replace

"Today’s best frontier models are already approaching the quality of work produced by industry experts."


The Real Stakes, and Real Story, of Peter Thiel’s Antichrist Obsession

Thirty years ago, a peace-loving Austrian theologian spoke to Peter Thiel about the apocalyptic theories of Nazi jurist Carl Schmitt. They’ve been a road map for the billionaire ever since.

Google’s Latest AI Ransomware Defense Only Goes So Far

Google has launched a new AI-based protection in Drive for desktop that can shut down an attack before it spreads—but its benefits have their limits.

Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

Anthropic is starting to train its models on new Claude chats. If you’re using the bot and don’t want your chats used as training data, here’s how to opt out.


US to Take Stake in Lithium Americas to Boost Nevada Project

The US government agreed to acquire a stake in Lithium Americas Corp., Secretary of Energy Chris Wright said, giving a boost to the Canadian company as it develops its Thacker Pass lithium project in Nevada.

Moody’s Says It’s Likely to Cut Electronic Arts, Following S&P

Moody’s Ratings expects to downgrade Electronic Arts Inc. by multiple notches after the videogame maker is taken private, a day after S&P Global Ratings took the same preliminary step.

AI-Generated Actress Draws a Rebuke From Actors Union

SAG-Aftra, the union of film and TV industry actors, criticized the AI-generated character Tilly Norwood, who is the subject of a viral video poking fun at the entertainment industry.

Santander-Led Bank Group Stuck With Some Debt for Verint Buyout

A group of banks led by Banco Santander SA will be forced to keep a portion of the $2.7 billion financing to support Thoma Bravo’s acquisition of customer-service automation business Verint Systems Inc., according to people with knowledge of the matter.

Trump Order Directs Use of AI to Boost Childhood Cancer Research

President Donald Trump directed the federal government to use artificial intelligence to improve childhood cancer research and to funnel $50 million to the National Institutes of Health for that initiative, even as his administration pursues other cuts at the agency.

Nubank Applies for US Bank Charter In Global Expansion Push

Nu Holdings Ltd. said it applied for a US national bank charter as the Brazilian fintech seeks to expand beyond the Latin American market.

Tesla’s Soaring Stock Puts Focus on Sales Outlook in Robot Shift

Tesla Inc. shares climbed 33% in September as investors rallied around Chief Executive Officer Elon Musk’s renewed focus on the company. That’s drawing attention to whether the key third-quarter sales figures coming later this week will be strong enough to sustain the momentum.

Alphabet’s AI Strength Fuels Biggest Quarterly Jump Since 2005

Alphabet Inc. shares closed out their biggest quarterly gain in 20 years, the latest reflection of how investors are turning more positive on the Google parent as it strengthens its foothold in artificial intelligence.

Amazon Unveils Revamped Echo Devices, Ring Cameras

Amazon has unveiled an updated line of products to take on Apple in the artificial intelligence era. Amazon's SVP of Devices and Services, Panos Panay, tells Bloomberg's Ed Ludlow that he hopes to build devices that people want to showcase in their homes and use at every price point, with a focus on putting detail into every product. Amazon introduced new Echo devices, Ring cameras, and TV devices. He speaks to Ludlow from New York. (Source: Bloomberg)

Intel Affirms Plan for Ohio Project After US Senator’s Pressure

Intel Corp. said that it remains committed to its plans for a sprawling chip manufacturing plant in Ohio after Republican US Senator Bernie Moreno pressed the company for more information about delays in the multibillion-dollar project.

CoreWeave’s $14 Billion Meta Deal, Spotify’s Ek to Leave CEO Role | Bloomberg Tech 9/30/2025

Bloomberg’s Caroline Hyde and Ed Ludlow discuss CoreWeave’s deal to supply Meta with up to $14.2 billion worth of computing power. Plus, Spotify shares sink on news that founder Daniel Ek will transition from CEO to chairman. And, Anthropic Chief Product Officer Mike Krieger explains why the company is focusing on enterprise clients with its new model that can code for 30 hours straight. (Source: Bloomberg)

Legora, a Startup Using AI to Tackle Routine Legal Work, in Talks to Raise at $1.7 Billion Value

Legora, a Swedish artificial intelligence startup that works with law firms, is in discussions to raise more than $100 million in financing that would value the company at around $1.7 billion, according to people familiar with the talks.

OpenAI Releases Social App for Sharing AI Videos From Sora

OpenAI is releasing a standalone social app for making and sharing AI-generated videos with friends, an attempt to supercharge adoption for the emerging technology just as ChatGPT did for chatbots three years ago. The free Sora app, available Tuesday by invitation, is powered by a new version of OpenAI’s video-making software of the same name. As with the original Sora, released last December, users can generate short clips in response to text prompts, but the new app allows people to see videos

Amazon Partners with FanDuel to Add Betting Feature to NBA Games

Amazon.com Inc. is partnering with FanDuel so that customers of both companies can track their bets while streaming National Basketball Association games.

Meta Is Said to Acquire Chips Startup Rivos to Push AI Effort

Meta Platforms Inc. is acquiring chips startup Rivos Inc., according to a person familiar with the deal, part of an effort to bolster its internal semiconductor development and control more of its infrastructure for artificial intelligence work.

FCC Advances Plan to Ease Mergers of TV Networks, Station Groups

The Federal Communications Commission advanced plans to reform broadcast ownership rules, including a proposal that would allow the Big Four TV networks to merge.

Amazon Is Overhauling Its Devices to Take on Apple in the AI Era

Under Microsoft veteran Panos Panay, the company looks to add polish to its gadgets at every price level.

Google Offers More Ad Data to Publishers at DOJ Antitrust Trial

Google is willing to share more data with publishers to remedy a court’s finding that the Alphabet Inc. unit illegally monopolized some advertising technology, a senior executive said.

DoorDash Unveils Delivery Robot, Smart Scale in Hardware Debut

DoorDash Inc., the largest food-delivery app in the US, unveiled a delivery robot and a smart scale for restaurants, showcasing the company’s yearslong effort to develop hardware.

Amazon Revamps Echo Speakers, Displays in Hardware Reboot

New speakers include revamped look, upgraded silicon and integration with Alexa+ subscription service.

Vercel Notches $9.3 Billion Valuation in Latest AI Funding Round

Artificial intelligence startup Vercel raised $300 million in a new funding round led by Accel and Singapore’s sovereign wealth fund GIC Pte, attracting a valuation of $9.3 billion.

ECB Has Offloaded Its Entire Holdings of Worldline Bonds

The European Central Bank has offloaded its entire position in the bonds of embattled French payments company Worldline SA, according to its latest filings.

Amazon Launches Color Kindle with Stylus, $40 TV Stick With 4K Video

Company also rolls out new TV sets and upgraded home security products from Ring and Blink. 

Kushner’s Secret Saudi Talks Paved Way for $55 Billion EA Deal

Months before Electronic Arts Inc. and Saudi Arabia’s sovereign wealth fund agreed to a record-breaking buyout deal, President Donald Trump’s son-in-law made a pivotal introduction.

China Hackers Breached Foreign Ministers’ Emails, Palo Alto Says

Chinese hackers breached email servers of foreign ministers as part of a years-long effort targeting the communications of diplomats around the world, according to researchers at the cybersecurity firm Palo Alto Networks Inc.

Spotify Names Co-CEOs as Daniel Ek Transitions to Chairman

Spotify Technology SA Chief Executive Officer Daniel Ek is stepping aside after almost two decades at the music streaming company he co-founded, leaving the leadership in the hands of two trusted executives.

Beats Upgrades Fit Earbuds With Improved Durability, Smaller Case

Beats, the Apple Inc.-owned audio brand, unveiled new Powerbeats Fit earbuds on Tuesday, giving consumers another in-house alternative to the popular AirPods line.

Fintech Brex Launches Stablecoin Payment Platform Amid Demand

Fintech company Brex Inc. will start allowing stablecoin transactions on its platform, becoming the latest firm to adopt the digital asset as a payment method.


My son and I moved from the US to Spain. We tried out 3 cities before settling into a place that felt right.

My son and I moved from the US to Barcelona when he was 3. It took us time to find our place and I made some mistakes along the way.

After a Reddit user took a dig at Harvey, Harvey's CEO fired back — and brought receipts

The question of whether lawyers are really using Harvey has ramifications for its employees and investors — and the dozens of competitors in its wake.

Military women fear losing 'every bit of ground' as Hegseth looks backward to the 1990s

Defense Secretary Pete Hegseth wants to review standard changes since the 1990s. Military women watching shifts in DoD worry about what's coming.

How a US government shutdown could impact your next flight

A government shutdown could disrupt flights, extend security wait times, and slow safety functions, creating a stressful travel experience for flyers.

When my 20-year marriage ended, I had no job and knew little about money. Now, I'm confident in my financial future and career.

After my marriage of 20 years ended in divorce, I knew nothing about my finances. So, I got a job and learned how to better plan my financial future.

Yes, Congress still gets paid during a government shutdown

Federal workers may end up missing paychecks, depending on how long the shutdown lasts. Members of Congress won't have to worry about that.

Salesforce challenger Zeta Global is making its biggest-ever acquisition as it looks to corner the loyalty market

Zeta, which helps marketers attract and retain customers, is doubling down on a strategy to get its clients to use more than one of its services.

Spotify and Comcast are the latest to announce co-CEOs. It's a model that can backfire — or pay off big.

A number of high-profile companies have announced co-CEO structures lately. But having two cooks in the kitchen can be risky.

Read the email federal workers are getting hours before a potential government shutdown

Federal workers received an email Tuesday saying that a looming government shutdown will come with furloughs.

Sam Altman is squaring off against Hollywood with the new Sora

Sam Altman is reportedly telling Hollywood studios and other copyright owners that OpenAI's new Sora is going to use their work without permission.

THEN AND NOW: The cast of '10 Things I Hate About You'

The modern take on Shakespeare's "The Taming of the Shrew," starring Julia Stiles and Heath Ledger, is now streaming on Netflix.

Trump went off about 'ugly' stealth warships in his wandering chat with US generals and admirals

The US president has often focused on the appearance and aesthetic of US Navy ships, calling them ugly compared to other countries' vessels.

MSNBC is doubling down on live events as it heads into the Versant spinoff

MSNBC plans to triple its number of events next year as it seeks to diversify its business following a planned split from NBCU.

Spotify made the rare move of appointing 2 CEOs. Netflix's bosses have described the key to making it work.

What happens when dual CEOs disagree? Netflix's co-CEOs Ted Sarandos and Greg Peters have described their decision-making process.

The best and worst looks at the New York Film Festival so far

Celebrities have been stepping out each night for the New York Film Festival. Some have stunned in high fashion, while others have missed the mark.

Why Norland nannies are so expensive

Norland graduates can earn nearly $200,000 as professional nannies after learning self-defense, skidpan driving, and more at the British institution.

As a Texas local, I skip the crowds in Austin and head west to a charming nearby city with incredible wineries

I live in Texas, and love the small city of Fredericksburg. There's a lot to do there, from visiting wineries to eating at great restaurants downtown.

I left the Bay Area and moved to Barcelona after my tech layoff. It felt like losing my identity, but I'm so much happier now.

After Steph Kumar was laid off from Twitter, she realized how her career shaped her identity. She redefined success by founding a startup and moving to Barcelona.

I quit investment banking at Citi and professional tennis after burning out. I learned about when to walk away from a job.

Vitoria Okuyama, 26, played in the US Open and later worked at Citi. She burned out from both careers, which taught her about knowing when to stop.

Cassie tells Diddy judge she's terrified the hip-hop mogul could walk free

Cassie Ventura, in a letter to Sean "Diddy" Combs' judge, said that she's worried the mogul or his associates will come after her and her family.


The AI Value Chain Has Shifted. Here’s How Founders Can Still Build A Sustainable Business

The old SaaS playbook (build a great app, charge monthly, and let infrastructure fade into the background) doesn’t hold up when your core cost scales with usage, writes guest author Itay Sagie, who shares three moves AI founders can make to stay in the game.


Transforming Supply Chain Management with AI Agents

Efficiently managing supply chains has long been a top priority for many industries...

Setting the Stage: AI Governance for Insurance in 2025

As insurers accelerate adoption of artificial intelligence, regulatory scrutiny and...

Announcing Data Intelligence for Cybersecurity

Today, we’re thrilled to announce the launch of Data Intelligence for Cybersecurity—...

Revolutionizing Car Measurement Data Storage and Analysis: Mercedes-Benz's Petabyte-Scale Solution on the Databricks Intelligence Platform

With the rise of connected vehicles, the automotive industry is experiencing...


From Generative to Agentic AI: What It Means for Data Protection and Cybersecurity

As artificial intelligence continues its rapid evolution, two terms dominate the conversation: generative AI and the emerging concept of agentic AI. While both represent significant advancements, they carry very different […]

The post From Generative to Agentic AI: What It Means for Data Protection and Cybersecurity appeared first on Datafloq.

Data Privacy and Cybersecurity in Smart Building Platforms

The way we design, operate, and experience buildings has changed dramatically in the past decade. Thanks to the rise of smart building platforms, physical spaces are becoming more efficient, sustainable, […]


Why MinIO Added Support for Iceberg Tables

MinIO launched AIStor nearly a year ago to provide enterprises with an ultra-scalable object store for AI use cases. Today, it expanded AIStor into the world of big data […]


Bloomberg Finds AI Data Centers Fueling America’s Energy Bill Crisis

The rise of AI is putting real pressure on the U.S. power grid. As demand for compute explodes, the data centers behind AI systems are becoming some of the country’s...



Sparse Models and the Efficiency Revolution in AI

The early years of deep learning were defined by scale: bigger datasets, larger models, and more compute. But as parameter counts stretched into the hundreds of billions, researchers hit a wall of cost and energy. A new paradigm is emerging to push AI forward without exponential bloat: sparse models.

The principle of sparsity is simple. Instead of activating every parameter in a neural network for every input, only a small subset is used at a time. This mirrors the brain, where neurons fire selectively depending on context. By routing computation dynamically, sparse models achieve efficiency without sacrificing representational power.

One leading approach is the mixture-of-experts (MoE) architecture. Here, the model contains many specialized subnetworks, or “experts,” but only a handful are called upon for a given task. Google’s Switch Transformer demonstrated that trillion-parameter MoE models could outperform dense models while using fewer active parameters per forward pass. This creates a path to scale capacity without proportional increases in computation.
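
The routing idea can be sketched in a few lines of numpy. The router and experts below use random weights purely for illustration (in a real model both are learned, and the experts are full feed-forward networks):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Router and experts would be learned in a real model; random here for illustration.
router_w = rng.normal(size=(d_model, n_experts))
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # one tiny "expert" each

def moe_forward(x):
    """Route one token through only the top_k highest-scoring experts."""
    logits = x @ router_w                      # router scores, shape (n_experts,)
    top = np.argsort(logits)[-top_k:]          # indices of the k best experts
    gate = np.exp(logits[top] - logits[top].max())
    gate /= gate.sum()                         # softmax over the selected experts only
    # The remaining experts are never evaluated -- that is the compute saving.
    return sum(g * (x @ expert_w[i]) for g, i in zip(gate, top))

y = moe_forward(rng.normal(size=d_model))
```

With 4 experts and top-2 routing, only half the expert parameters participate in any forward pass, even though all of them add to the model's capacity.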

Sparsity is not limited to MoEs. Pruning techniques remove redundant weights after training, producing leaner networks with little loss in accuracy. Structured sparsity goes further, eliminating entire neurons or channels, which aligns better with hardware acceleration. Research into sparse attention mechanisms also enables transformers to handle long sequences more efficiently by focusing only on relevant tokens.
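
Magnitude pruning, the simplest of these post-training techniques, can be sketched as follows (the 90% sparsity target and the random stand-in weight matrix are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(64, 64))  # stand-in for a trained layer's weights

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights, keeping a (1 - sparsity) fraction."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

pruned, mask = magnitude_prune(weights, sparsity=0.9)
density = mask.mean()  # fraction of weights that survive, here about 0.1
```

Structured variants apply the same idea at the granularity of whole rows, neurons, or channels, which is what lets hardware actually skip the zeroed computation.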

The implications are profound. Sparse models reduce training and inference costs, lower energy consumption, and make it feasible to deploy large-capacity systems at the edge. They also open the door to modularity: experts can be added, swapped, or fine-tuned independently, creating more flexible AI ecosystems.

Challenges remain in hardware support and training stability. GPUs and TPUs are optimized for dense matrix multiplications, making it harder to realize the full benefits of sparsity. New accelerators and software libraries are being developed to close this gap. Ensuring balanced training of experts is another open problem, as some experts risk being underutilized.

The shift toward sparsity signals a maturation of AI. Instead of brute-force scaling, researchers are learning to use resources more intelligently. In the future, the most powerful models may not be those with the most parameters, but those that know when to stay silent.

References
https://arxiv.org/abs/2101.03961

https://arxiv.org/abs/1910.04732

https://www.nature.com/articles/s41586-021-03551-0

Dev Log 27 - Gear Registry Refactor

🧱 Dev Log: Gear Registry, Survival Consumption & Meat Schema Overhaul
Date Range: 28–30 September

Focus: Gear registry overwrite logic, prefab refresh, stat effects, disease rolls, advanced meat taxonomy, and schema law adherence

🔧 Technical Milestones

✅ GearRegistryPopulator Refactor (28 Sept)
Rewrote registry logic to overwrite duplicates based on ItemID.

Ensured all gear assets in Assets/GearAssets are re-scanned and re-injected.

Replaced outdated entries with updated versions. (Overwrites)

Logged additions and overwrites for traceability.

Menu item confirmed: Tools → Populate Gear Registry.

✅ GameSceneManager Archetype Injection Fix
Removed invalid GetComponent() call.

Injected PlayerArchetype via PlayerProfile.Instance.selectedArchetype.

Restored prefab-safe gear injection flow.

Confirmed runtime obedience across InjectStartingGear() coroutine.

HUD sync and stat mutation confirmed post-injection.

✅ Survival Consumption System Integration (29 Sept)

Expanded InventoryItem schema (now injects real values instead of placeholder data):

healthRestore, staminaRestore, hungerRestore, hydrationRestore

diseaseChance, diseaseID

IsLiquid, IsReusableContainer, IsEmpty

Updated PlayerStats.cs:

Added ConsumeInventoryItem() method.

Handles stat restoration, disease rolls, item disposal, and HUD sync.

Updated PlayerInventoryManager.cs:

Injects full InventoryItem data from IInjectableItem.

Added RemoveItem() method for runtime disposal.

Refactored slot injection to support stat and disease fields.

Expanded IInjectableItem.cs interface:

Added getters for all survival stats, disease logic, and container flags.

Confirmed HUD auto-sync via OnStatsChanged.

✅ Inventory Slot Refresh Ritual (30 Sept)
Patched PlayerInventoryManager.RemoveItem() to invoke InitializeInventory() immediately after item removal.

Guarantees full grid rebuild and instant visual feedback.

Removed redundant refresh logic from PlayerStats.ConsumeInventoryItem() — now centralized in InventoryManager.

Confirmed slot clearing, prefab obedience, and ghost sprite elimination.

✅ IInjectableItem Refactor
All item scripts implementing IInjectableItem were fully updated:

Survival stat fields, disease logic, container flags.

Prefab-safe injection and runtime compatibility confirmed.

Runtime obedience validated across food, drink, and cursed relics.

✅ Debug Tag Refactor
Replaced all [Dragon] debug tags with [Unicorn] across all relevant scripts.

Ensures consistent traceability and schema-level clarity in console output.

✅ Slot-Level Trace Injection
Added [Unicorn] ⚠️ InitializeInventory() called — trace source. to monitor refresh triggers.

Confirmed prefab obedience and runtime slot clearing.

🍖 Meat Schema Sprite & Expansion
✅ Species Variants Added: sprites assigned for each species. Every species includes raw and cooked variants, with unique sprites and stat profiles:

🐻 Bear

🦌 Deer

🦆 Duck

🦚 Pheasant

🐖 Pig

🐄 Cow

🐀 Rat

🧍 Human

🐔 Chicken

🐇 Rabbit

✅ Cooking States Implemented
Each species supports the following cooking states:

🔴 Raw

🔥 Roasted

💧 Boiled

🍳 Fried

🔥 Grilled

Still need to finish sheep, turkey, and wolf zombie; may add more variants later.

🔧 Integration Notes
All variants implement updated IInjectableItem interface.

Sprites applied and verified for prefab compatibility.

Stat effects and disease risk vary by species and cooking method.

Human meat variants accepted — schema now supports a cannibal tier, along with appropriate diseases.

All items injected via InjectStartingItemsIntoLegs() and runtime testing.

Icons, max uses, and item IDs confirmed during slot injection.

⚠️ Outstanding Tasks
Audit all item assets and prefabs for missing data.

Patch all scripts referencing InventoryItem to match the updated schema. Fruit was the test schema and is working; now to adjust all the others.

Validate RemainingUses logic across liquid items. This is more complicated, as liquids work with volumes rather than single-use instances, and some containers support refilling.

Add fallback placeholder icons or labels for empty containers.

Tooltip panels may need conditional logic for empty containers or disease warnings.

🧪 Known Issues
~180 compile errors due to missing fields or outdated references. (FIXED)

Some prefab slots may fail to inject without updated IInjectableItem logic.

Tooltip panels may misrepresent disease risk or container state.

🧙‍♂️ Mythic Checkpoints

🧿 “Registry Reforged” — Gear database now overwrites echoes, not ignores them

🧬 “Archetype Obeys” — Injection path restored via persistent memory

🧱 “Prefab Integrity Confirmed” — No nulls, no ghosts, no broken slots

🧠 “Tools Tab Ritual” — Gear registry now summoned via top-level menu

🧪 “Meat Tier Unlocked” — Species × Cooking × Stat × Disease schema now active

🧹 “Slot Finality” — All consumed items now vanish with full grid refresh

🧠 “AI Obeys Schema” — Modular, non-repetitive, prefab-safe collaboration confirmed

React: Building an Independent Modal with createRoot

Back in 2021, when I used React for the first time in my career, I managed modal components with conditional rendering.

const Component = () => {
  // ...

  return (
    <div>
      {visible && <Modal onOk={handleOk} />}
    </div>
  );
}

This is based on the idea that a component is rendered only when it needs to be.

Put another way: the modal component decides whether it should be rendered, using logic written inside the modal component itself.

const Component = () => {
  // ...

  return (
    <div>
      <Modal visible={visible} onOk={handleOk} />
    </div>
  );
}

Here, we don't use conditional rendering outside the modal; instead, we pass a prop that controls whether it renders.

As I dove deeper into React, I started using a context provider to manage modal components, so that I could simply use hooks to render modals.

const Component = () => {
  const {showModal} = useModal();

  const handleModalOpen = () => {
    showModal({
      message: 'hello',
      onConfirm: () => { alert('clicked'); }
    });

  }

  return (
    <div>
        {/*...*/}
    </div>
  );
}

const ModalProvider = () => {
  // ...
  return (
    <ModalContext.Provider value={{
      showModal,
    }}>
      <Modal {...modalState} />
    </ModalContext.Provider>
  );
}

I even wrote a post about this approach here.

In the side project I recently started, I had to create modal components.

I didn't want to scatter code across different places; I just wanted to call a function and have a modal render. Then an idea came to mind: render the component on a new root.

This way, we don't write extra code to mount the modal somewhere like the app root. The rendering logic is handled inside the component itself.

The implementation could be different depending on your project.

In my project, I wrote the modal component like this:

import { useCallback, useState } from 'react';
import Text from '../../Text';
import Button from '../../Button';
import { createRoot } from 'react-dom/client';
import Input from '../../Input';

type ConfirmModalProps = {
  title: string;
  titleColor: 'red';
  message: string;
  confirmText: string;
  confirmationPhrase?: string;
  cancelText?: string;
  onConfirm: VoidFunction;
  onCancel: VoidFunction;
};

type ConfirmParameters = Omit<ConfirmModalProps, 'onConfirm' | 'onCancel'> & {
  onConfirm?: VoidFunction;
  onCancel?: VoidFunction;
};

const ConfirmModal = ({
  title,
  titleColor,
  message,
  confirmText,
  confirmationPhrase,
  cancelText = 'Cancel',
  onConfirm,
  onCancel,
}: ConfirmModalProps) => {
  const [input, setInput] = useState('');

  const confirmButtonDisabled =
    confirmationPhrase !== undefined && input !== confirmationPhrase;

  return (
    <div className="absolute z-50 inset-0 bg-black/50">
      <div className="fixed top-[50%] left-[50%] translate-x-[-50%] translate-y-[-50%] rounded-lg p-6 shadow-lg w-full max-w-[calc(100%-2rem)] sm:max-w-lg bg-gray-900 border border-gray-700 flex flex-col gap-2">
        <Text size="lg" color={titleColor} className="font-bold">
          {title}
        </Text>
        <Text>{message}</Text>
        {confirmationPhrase ? (
          <Input
            className="w-full"
            placeholder={`${confirmationPhrase}`}
            onChange={(e) => setInput(e.target.value)}
          />
        ) : null}
        <div className="flex justify-end gap-2 mt-2">
          <Button color="lightGray" onClick={onCancel}>
            {cancelText}
          </Button>
          <Button
            color="red"
            varient="fill"
            onClick={onConfirm}
            disabled={confirmButtonDisabled}
          >
            {confirmText}
          </Button>
        </div>
      </div>
    </div>
  );
};

export const useConfirmModal = () => {
  const confirm = useCallback(
    ({ onConfirm, onCancel, ...params }: ConfirmParameters) => {
      const tempElmt = document.createElement('div');
      document.body.append(tempElmt);
      const root = createRoot(tempElmt);

      const cleanup = () => {
        // Unmount the React root (not just the DOM node) so the tree is
        // released; deferred so the current click event finishes first.
        setTimeout(() => {
          root.unmount();
          tempElmt.remove();
        });
      };

      const handleConfirm = () => {
        cleanup();
        onConfirm?.();
      };

      const handleCancel = () => {
        cleanup();
        onCancel?.();
      };

      root.render(
        <ConfirmModal
          {...params}
          onConfirm={handleConfirm}
          onCancel={handleCancel}
        />
      );
    },
    []
  );

  return {
    confirm,
  };
};


The main logic is here:

export const useConfirmModal = () => {
  const confirm = useCallback(
    ({ onConfirm, onCancel, ...params }: ConfirmParameters) => {
      const tempElmt = document.createElement('div');
      document.body.append(tempElmt);
      const root = createRoot(tempElmt);

      // ...

      root.render(
        <ConfirmModal
          {...params}
          onConfirm={handleConfirm}
          onCancel={handleCancel}
        />
      );
    },
    []
  );

  return {
    confirm,
  };
};

It appends the modal component to the document body. No other code is needed; I don't have to write anything anywhere else.

All I need to do is call the function:

  const { confirm } = useConfirmModal();

  const handleDeleteClick = () => {
    confirm({
      title: 'Are you absolutely sure?',
      message:
        'This action cannot be undone. This will permanently delete your account and remove your data from our servers.',
      titleColor: 'red',
      confirmationPhrase: 'delete my account',
      confirmText: 'Yes, delete my account',
    });
  };

This modal component itself is independent.

It might not be a good fit for your project, but in this post I wanted to introduce another way to manage modal components in React.

I hope you found it helpful.

Happy Coding!


50 Most Useful JavaScript Snippets

let randomNum = Math.floor(Math.random() * maxNum);
function isEmptyObject(obj) { 
  return Object.keys(obj).length === 0; 
}
function countdownTimer(minutes) { 
  let seconds = minutes * 60;
  let interval = setInterval(() => {
    if (seconds <= 0) {
      clearInterval(interval);
      console.log("Time's up!");
    } else {
      console.log(`${Math.floor(seconds / 60)}:${seconds % 60}`);
      seconds--;
    }
  }, 1000);
}
function sortByProperty(arr, property) { 
  return arr.sort((a, b) => (a[property] > b[property]) ? 1 : -1); 
}
let uniqueArr = [...new Set(arr)];
function truncateString(str, num) { 
  return str.length > num ? str.slice(0, num) + "..." : str; 
}
function toTitleCase(str) { 
  return str.replace(/\b\w/g, txt => txt.toUpperCase()); 
}
let isValueInArray = arr.includes(value);
let reversedStr = str.split("").reverse().join("");
let newArr = oldArr.map(item => item + 1);
function debounce(func, delay) { 
  let timeout; 
  return function(...args) { 
    clearTimeout(timeout); 
    timeout = setTimeout(() => func.apply(this, args), delay); 
  }; 
}
function throttle(func, limit) { 
  let lastFunc; 
  let lastRan; 
  return function(...args) { 
    if (!lastRan) { 
      func.apply(this, args); 
      lastRan = Date.now(); 
    } else { 
      clearTimeout(lastFunc); 
      lastFunc = setTimeout(function() { 
        if ((Date.now() - lastRan) >= limit) { 
          func.apply(this, args); 
          lastRan = Date.now(); 
        } 
      }, limit - (Date.now() - lastRan)); 
    } 
  }; 
}
const cloneObject = (obj) => ({ ...obj });
const mergeObjects = (obj1, obj2) => ({ ...obj1, ...obj2 });
function isPalindrome(str) { 
  const cleanedStr = str.replace(/[^A-Za-z0-9]/g, '').toLowerCase(); 
  return cleanedStr === cleanedStr.split('').reverse().join(''); 
}
const countOccurrences = (arr) => 
  arr.reduce((acc, val) => (acc[val] ? acc[val]++ : acc[val] = 1, acc), {});
const dayOfYear = date => 
  Math.floor((date - new Date(date.getFullYear(), 0, 0)) / 1000 / 60 / 60 / 24);
const uniqueValues = arr => [...new Set(arr)];
const degreesToRads = deg => (deg * Math.PI) / 180;
const defer = (fn, ...args) => setTimeout(fn, 1, ...args);
const flattenArray = arr => arr.flat(Infinity);
const randomItem = arr => arr[Math.floor(Math.random() * arr.length)];
const capitalize = str => str.charAt(0).toUpperCase() + str.slice(1);
const maxVal = arr => Math.max(...arr);
const minVal = arr => Math.min(...arr);
function shuffleArray(arr) {
  // Fisher-Yates shuffle: unbiased, unlike sorting by Math.random()
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}
const removeFalsy = arr => arr.filter(Boolean);
const range = (start, end) => Array.from({ length: end - start + 1 }, (_, i) => i + start);
const arraysEqual = (a, b) => 
  a.length === b.length && a.every((val, i) => val === b[i]);
const isEven = num => num % 2 === 0;
const isOdd = num => num % 2 !== 0;
const removeSpaces = str => str.replace(/\s+/g, '');
function isJSON(str) { 
  try { 
    JSON.parse(str); 
    return true; 
  } catch { 
    return false; 
  } 
}
const deepClone = obj => JSON.parse(JSON.stringify(obj));
function isPrime(num) {
  if (num <= 1) return false;
  for (let i = 2; i <= Math.sqrt(num); i++) {
    if (num % i === 0) return false;
  }
  return true;
}
const factorial = num => 
  num <= 1 ? 1 : num * factorial(num - 1);
const timestamp = Date.now();
const formatDate = date => date.toISOString().split('T')[0];
const uuid = () => 
  'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c => {
    const r = Math.random() * 16 | 0;
    const v = c === 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
const hasProp = (obj, key) => key in obj;
const getQueryParams = url => 
  Object.fromEntries(new URLSearchParams(new URL(url).search));
const escapeHTML = str => 
  str.replace(/[&<>'"]/g, tag => ({
    '&':'&amp;','<':'&lt;','>':'&gt;',
    "'":'&#39;', '"':'&quot;'
  }[tag]));
const unescapeHTML = str => 
  str.replace(/&(amp|lt|gt|#39|quot);/g, match => ({
    '&amp;':'&','&lt;':'<','&gt;':'>',
    '&#39;':"'", '&quot;':'"'
  }[match]));
const sleep = ms => new Promise(res => setTimeout(res, ms));
const sumArray = arr => arr.reduce((a, b) => a + b, 0);
const avgArray = arr => arr.reduce((a, b) => a + b, 0) / arr.length;
const intersection = (arr1, arr2) => arr1.filter(x => arr2.includes(x));
const difference = (arr1, arr2) => arr1.filter(x => !arr2.includes(x));
const allEqual = arr => arr.every(val => val === arr[0]);
const randomHexColor = () => 
  '#' + Math.floor(Math.random()*16777215).toString(16).padStart(6, '0');

The Frozen Collection Vault: frozenset and Set Immutability

Timothy's membership registry had transformed how the library tracked visitors and members, but Professor Williams arrived with a problem that would reveal a fundamental limitation of his set system.

"I need to catalog research groups," Professor Williams explained, "where each group is identified by its members. Some groups overlap—Alice and Bob form one research pair, Bob and Charlie form another. I want to use the member sets as keys in my catalog."

Timothy confidently tried to implement her request:

research_catalog = {}
group_one = {"Alice", "Bob"}
group_two = {"Bob", "Charlie"}

research_catalog[group_one] = "Quantum Physics Project"
# TypeError: unhashable type: 'set'

The system rejected his attempt. Margaret appeared and smiled knowingly. "You've discovered that regular sets can't serve as dictionary keys. They're mutable—you can add or remove members after creation. Remember the immutability rule?"

Timothy recalled his earlier lesson: only unchangeable objects could be dictionary keys. If a set could be modified after being used as a key, the entire hash table system would break.

Margaret led Timothy to a specialized vault labeled "Frozen Collections." Inside were sets that had been permanently sealed—no additions, no removals, no modifications of any kind.

research_catalog = {}

# Create frozen sets - permanently immutable
group_one = frozenset({"Alice", "Bob"})
group_two = frozenset({"Bob", "Charlie"})

# These work as dictionary keys!
research_catalog[group_one] = "Quantum Physics Project"
research_catalog[group_two] = "Mathematics Collaboration"

print(research_catalog[group_one])  # "Quantum Physics Project"

The frozen sets looked and behaved like regular sets for all read operations, but they were locked forever. This immutability made them hashable and suitable as dictionary keys.

Timothy learned that converting between regular and frozen sets was straightforward:

# Regular set - mutable
active_members = {"Alice", "Bob", "Charlie"}
active_members.add("David")  # This works

# Convert to frozen - now immutable
permanent_members = frozenset(active_members)
permanent_members.add("Eve")  # AttributeError: frozenset has no 'add'

# Convert back to regular if needed
modifiable_again = set(permanent_members)
modifiable_again.add("Eve")  # This works

The frozen sets supported all the same operations as regular sets—union, intersection, difference—but rejected any attempt to modify their contents.

Professor Williams revealed her true challenge: "I need sets of sets. Each research department contains multiple research groups."

Timothy tried the obvious approach with regular sets:

quantum_dept = {
    {"Alice", "Bob"},
    {"Charlie", "David"}
}
# TypeError: unhashable type: 'set'

Sets can only contain hashable items, and regular sets aren't hashable. Margaret showed the solution:

quantum_dept = {
    frozenset({"Alice", "Bob"}),
    frozenset({"Charlie", "David"}),
    frozenset({"Eve", "Frank"})
}

# Check if a specific group exists in the department
target_group = frozenset({"Alice", "Bob"})
group_exists = target_group in quantum_dept  # True - instant lookup

Frozen sets enabled hierarchical membership structures that would have been impossible with regular sets.

Timothy discovered several scenarios where frozen sets were essential:

Caching function results based on set inputs:

cache = {}

def analyze_group(members):
    frozen_members = frozenset(members)

    if frozen_members not in cache:
        # Expensive computation here
        result = len(frozen_members) * 10
        cache[frozen_members] = result

    return cache[frozen_members]

# Works with different argument orders
analyze_group(["Alice", "Bob"])  # Computes result
analyze_group(["Bob", "Alice"])  # Uses cached result - same frozenset

Tracking unique combinations:

seen_pairs = set()

def record_interaction(person_a, person_b):
    pair = frozenset({person_a, person_b})

    if pair in seen_pairs:
        return "Already recorded"

    seen_pairs.add(pair)
    return "New interaction recorded"

record_interaction("Alice", "Bob")  # "New interaction recorded"
record_interaction("Bob", "Alice")  # "Already recorded" - same pair

Graph edges as dictionary keys:

edge_weights = {}

# Store weights for undirected graph edges
edge_weights[frozenset({"Node A", "Node B"})] = 5.2
edge_weights[frozenset({"Node B", "Node C"})] = 3.1

# Retrieve weight regardless of node order
weight = edge_weights[frozenset({"Node B", "Node A"})]  # 5.2 - order doesn't matter

Margaret showed Timothy that frozen sets supported all read operations but rejected modifications:

regular_set = {"Alice", "Bob", "Charlie"}
frozen_set = frozenset({"Alice", "Bob", "Charlie"})

# Operations that work on both
common = regular_set & frozen_set  # Intersection works
combined = regular_set | frozen_set  # Union works
is_member = "Alice" in frozen_set  # Membership check works

# Operations only regular sets support
regular_set.add("David")  # Works
frozen_set.add("David")  # AttributeError

regular_set.remove("Alice")  # Works  
frozen_set.remove("Alice")  # AttributeError

The frozen sets provided all the power of set operations while guaranteeing immutability.

Timothy asked whether frozen sets were slower due to their immutability constraints. Margaret explained: "Frozen sets are actually slightly more efficient for operations. Because they can't change, Python can cache their hash values permanently. Regular sets must recalculate hashes after modifications."

# Frozen sets cache hash value on creation
frozen = frozenset(range(1000))
hash(frozen)  # Computed once
hash(frozen)  # Retrieved from cache - instant

# Regular sets can't be hashed at all
regular = set(range(1000))
hash(regular)  # TypeError: unhashable type: 'set'

This hash caching made frozen sets ideal for repeated dictionary lookups or set membership checks.

Through mastering frozen sets, Timothy learned key principles:

Use frozenset when immutability is required: Dictionary keys, set elements, or anywhere hashability is needed.

Convert freely between types: Use regular sets for building collections, freeze them when you need immutability.

Enable nested collections: Frozen sets allow sets of sets and other hierarchical structures.

Leverage hash caching: Frozen sets are optimized for repeated lookups.

Choose based on mutability needs: Regular sets for dynamic membership, frozen sets for fixed groups.

Timothy's exploration of frozen sets revealed that immutability wasn't a limitation—it was a feature that unlocked new capabilities. By making sets unchangeable, Python made them usable as dictionary keys and set elements, enabling elegant solutions to problems that would otherwise require complex workarounds.

The secret life of Python sets included this immutable variant, proving that sometimes the most powerful tool is the one that promises never to change.

Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.

IGN: Maid of Sker VR - Official Announcement Trailer | Horror Game Awards Showcase 2025

Maid of Sker VR drops you into a deserted 1898 hotel for a first-person survival horror thrill ride; armed only with a defensive sound device, you must sneak, strategize, and stay silent to unravel a twisted supernatural mystery.

Launching November 2025 on PlayStation VR2 and Meta Quest, this VR version from Wales Interactive cranks up the immersion with its atmospheric setting and nerve-jangling tension.

Watch on YouTube

Cracking the Code: Decoding LLM Thought with Vector Symbolic Bridges

Large Language Models are amazing, but let's face it: they're black boxes. We feed them prompts, they spit out responses, but we often have no idea how they arrived at those conclusions. Wouldn't it be incredible if we could peek inside and understand the actual concepts an LLM is juggling?

Here's the core idea: Instead of directly interpreting the raw numerical vectors inside an LLM, what if we could map these vectors onto symbolic representations? Vector Symbolic Architectures (VSAs) offer a way to do just that. Think of VSAs as a Rosetta Stone that translates the LLM's vector space into something human-readable, a structured representation of its "thoughts." We can then use standard symbolic reasoning techniques to understand how it's processing information.

Imagine a painter's palette. Each color (vector) in the LLM's representation space gets assigned a name and relationship to other colors (symbols) via a VSA. Now, instead of seeing raw numbers, you see the composition of colors used for a particular task - giving you insight into what the LLM prioritized.
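
To make this concrete, here is a minimal numpy sketch of one classic VSA scheme (bipolar hypervectors with multiplicative binding and majority-vote bundling). The symbol names, codebook, and dimensionality are illustrative assumptions for the sketch, not anything extracted from a real LLM:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality; high dimension makes random vectors near-orthogonal

# Codebook: one random bipolar hypervector per symbol.
symbols = ["color", "shape", "red", "circle"]
codebook = {s: rng.choice([-1, 1], size=D) for s in symbols}

def bind(a, b):
    """Binding by elementwise multiplication; self-inverse for bipolar vectors."""
    return a * b

def bundle(*vs):
    """Superpose several vectors into one by majority sign."""
    return np.sign(np.sum(vs, axis=0))

def nearest(v):
    """Decode a noisy vector back to its closest codebook symbol."""
    return max(codebook, key=lambda s: np.dot(codebook[s], v))

# Encode the structure {color: red, shape: circle} as a single vector.
record = bundle(bind(codebook["color"], codebook["red"]),
                bind(codebook["shape"], codebook["circle"]))

# Unbinding with the "color" role recovers a vector close to "red".
decoded = nearest(bind(record, codebook["color"]))
```

Interpreting an LLM this way would mean learning a mapping from its hidden states into such a symbolic space, then decoding with nearest-neighbor lookups like the one above.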

Benefits of Using VSAs for LLM Interpretability:

  • Human-Readable Concepts: Move beyond opaque vectors to symbolic representations that developers can readily understand.
  • Targeted Probing: Focus your analysis on specific concepts or reasoning patterns within the LLM.
  • Failure Detection: Identify when the LLM's internal representation deviates from expected patterns, indicating potential errors or biases.
  • Compositional Understanding: See how the LLM combines different concepts to arrive at a final answer.
  • Model Comparison: Develop a basis to objectively compare internal workings of different LLM architectures.
  • Enhanced Debugging: Use the symbolic representations to diagnose and fix issues in the LLM's reasoning process.

One major implementation challenge is efficiently mapping the high-dimensional vector space of LLMs to a manageable symbolic space. Careful feature selection and dimensionality reduction are crucial. A novel application could be using VSA decoding to create adaptive prompts – prompts that adjust in real-time based on the LLM’s internal state.

Ultimately, bridging the gap between the numeric world of neural networks and the symbolic world of human understanding is paramount for building trustworthy and transparent AI. Vector Symbolic Architectures offer a powerful tool for achieving this goal. By understanding how LLMs represent and manipulate knowledge, we can build safer, more reliable, and more explainable AI systems.

Related Keywords: LLM interpretability, LLM explainability, vector embeddings, symbolic AI, neural networks, AI safety, black box AI, representation learning, cognitive architectures, hyperdimensional computing, holographic reduced representation, binding operations, compositionality, distributed representations, reverse engineering AI, prompt engineering, model understanding, latent space, feature extraction, knowledge representation, VSA encoding, semantic pointers, cognitive computing, neuromorphic computing, AI alignment

Securing Container Registries: Best Practices for Safe Image Management

Container registries are a vital part of any DevOps pipeline, acting as the central repository where container images are stored, shared, and pulled into production environments. Yet, they’re often overlooked when it comes to comprehensive security planning.

As the use of containers becomes more widespread, attackers have increasingly turned their attention to poorly secured registries to inject malicious code, steal credentials, or distribute compromised images. Without proper security measures in place, container registries can become the weakest link in your application delivery chain.

Every container that runs in your environment typically originates from a registry. If that image source is tampered with or compromised, malicious payloads can silently propagate across development, testing, and production systems. Whether you're using public repositories like Docker Hub or private registries hosted in the cloud, securing this image supply chain is critical to preventing downstream attacks.

Several key vulnerabilities are frequently exploited by attackers:

  • Unauthorized Access: Weak access controls allow bad actors to read, push, or delete container images, potentially injecting harmful versions or wiping critical builds.
  • Image Spoofing: Attackers upload images with names identical to trusted repositories, tricking developers into pulling and using tainted images.
  • Outdated and Vulnerable Images: Registries often host old images with known vulnerabilities that are still being used in deployments due to lack of scanning or version control.

1. Enable Strong Authentication and Access Controls

Avoid using anonymous access to registries. Integrate registry authentication with your existing identity provider and enforce multi-factor authentication (MFA) where possible. Implement role-based access controls (RBAC) to restrict push/pull capabilities based on team responsibilities.

2. Use Signed and Verified Images

Implement image signing tools like Notary (Docker Content Trust) or Cosign to ensure images haven't been tampered with. By verifying digital signatures before deployment, you reduce the risk of using manipulated or malicious images.

3. Automate Vulnerability Scanning

Configure your registry to automatically scan all new image uploads for known vulnerabilities using tools like Clair, Trivy, or Aqua. Regular scans help identify outdated libraries, insecure configurations, and base image flaws before images are pushed downstream.

4. Clean Up and Expire Stale Images

Old and unmaintained images are often the easiest attack vector. Define lifecycle policies to automatically remove outdated images and limit the number of active image versions stored in your registry.
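
As a sketch of what such a policy can look like (assuming AWS ECR's lifecycle policy format; the rule values here are arbitrary examples), the following expires untagged images two weeks after they were pushed:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images 14 days after push",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
```

Other registries (Harbor, GitLab, Artifactory) expose equivalent retention or cleanup policies under different names.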

5. Encrypt and Isolate Registry Storage

Ensure that your registry data is encrypted at rest and in transit. For added protection, isolate registry access to internal networks and limit public exposure where possible. Use HTTPS exclusively and manage TLS certificates securely.

Registry security is just one piece of the container security puzzle. To fully secure your containerized infrastructure, you must also protect applications during execution: container runtime security monitors containers while they run and detects attacks that slip past build-time defenses.

By securing your container registries, you reduce the risk of supply chain attacks and establish a stronger foundation for your overall container security strategy. Prevention begins at the source—make sure your registry isn’t an open door to your entire stack.

React Concurrent Mode Deep Dive - Complete Series (You Do Not Know React Yet)

Know WHY — Let AI Handle the HOW 🤖

A three-part series that takes you from surface-level understanding to deep architectural insights into React's concurrent features.

This isn't another tutorial on "how to use hooks." This series reveals the WHY behind React's concurrent rendering - the architectural decisions, low-level mechanisms, and mental models that transform how you build React applications.

Part 1: React Concurrent Mode Isn't Magic - It's Just Really Smart Priorities

The Foundation: Understanding Priority-Based Rendering

Learn the core mental model that makes everything else click:

  • Why React treats updates with different urgency levels
  • The critical insight: "It's about the VALUE, not the component"
  • How useDeferredValue tells React which work can wait
  • Real-world search and dashboard examples with exact timelines

Key Takeaway: React doesn't detect slow components - YOU assign priorities by choosing which values to defer.

Read Part 1 →

Part 2: React's Fiber Architecture - The Secret Behind Interruptible Rendering

The Implementation: How React Actually Pauses Work

Dive into the low-level architecture that makes concurrent rendering possible:

  • What is a Fiber and why linked lists matter
  • The brilliant double buffering system (current + work-in-progress trees)
  • Priority lanes explained with binary operations
  • Render phase (interruptible) vs Commit phase (atomic)
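
The priority-lanes bullet can be previewed with a toy bitmask; the lane values below are illustrative, not React's real constants:

```javascript
// Toy lane bitmasks: each lane is one bit; lower bits mean higher priority.
const SyncLane       = 0b0001;
const InputLane      = 0b0010;
const TransitionLane = 0b0100;

let pendingLanes = 0;

function schedule(lane) { pendingLanes |= lane; }          // mark work as pending
function highestPriority(lanes) { return lanes & -lanes; } // isolate lowest set bit

schedule(TransitionLane); // a low-priority transition is queued first
schedule(SyncLane);       // then an urgent sync update arrives
highestPriority(pendingLanes) === SyncLane; // the sync update wins
```

The `lanes & -lanes` trick extracts the lowest set bit in one operation, which is why bit positions double as a priority order.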

Key Takeaway: React maintains two complete trees and can throw away work-in-progress without affecting what's on screen.
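
That takeaway can be modeled in a few lines (a conceptual toy, not React's actual implementation):

```javascript
// Two trees: `current` is on screen, `workInProgress` is built off-screen.
let current = { view: 'profile A' };
let workInProgress = null;

function beginWork(nextView) {
  workInProgress = { view: nextView, alternate: current };
}
function discardWork() { workInProgress = null; }                      // screen never flickers
function commit() { current = workInProgress; workInProgress = null; } // atomic pointer swap

beginWork('profile B');
discardWork();          // abandoned: current.view is still 'profile A'
beginWork('profile C');
commit();               // current.view is now 'profile C'
```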

Read Part 2 →

Part 3: Time Slicing in React - How Your UI Stays Butter Smooth

The Execution: The 5ms Frame Budget Secret

Understand exactly when and why React pauses:

  • The 60fps problem and frame budgets
  • React's 5ms time slice rule
  • Frame-by-frame breakdown with millisecond precision
  • Suspense integration with concurrent features

Key Takeaway: React works in 5ms chunks, yielding to the browser after each chunk to maintain 60fps and responsive UI.

Read Part 3 →

You'll get the most value if you:

  • ✅ Know basic React (hooks, components, state)
  • ✅ Want to understand the "why" behind React's design
  • ✅ Are curious about performance optimization
  • ✅ Want to build truly responsive UIs
  • ✅ Like deep technical explanations with analogies

This series is NOT for you if:

  • ❌ You just want a quick "how-to" tutorial
  • ❌ You're brand new to React
  • ❌ You prefer surface-level explanations

Most tutorials teach you:

  • "Use useDeferredValue for expensive updates"
  • "Use useTransition for navigation"
  • Here's the API, good luck!

This series teaches you:

  • WHY React needs concurrent features
  • HOW the architecture enables interruptible rendering
  • WHEN React actually pauses and resumes work
  • WHAT happens at the millisecond level

Understanding the "why" transforms you from someone who uses React to someone who thinks in React.

After completing this series, you'll be able to:

  1. Diagnose Performance Issues

    • Understand why your UI feels laggy
    • Know which concurrent feature to use
    • Think in terms of priority lanes and fiber trees
  2. Build Better UIs

    • Create truly responsive search experiences
    • Handle tab switching without freezing
    • Keep animations smooth during heavy work
  3. Debug with Confidence

    • Understand exactly when components re-render
    • Know why some updates feel instant and others don't
    • Trace through fiber trees mentally
  4. Make Architectural Decisions

    • Choose between useDeferredValue and useTransition
    • Understand the trade-offs of concurrent features
    • Design data flows with priorities in mind
Recommended reading order:

  1. Start with Part 1 - Get the core mental model right
  2. Then Part 2 - Understand the implementation details
  3. Finish with Part 3 - See the complete picture with precise timing

Each post builds on the previous one, but can also stand alone if you're already familiar with some concepts.

If you enjoyed this deep dive, check out my other posts that explain the "why" behind React patterns.

Remember: Know the WHY behind React's design decisions, and the HOW becomes a natural extension of that understanding.

Why Value-Sensitive Design Is My North Star Now

I didn’t always talk explicitly about values in my design process. Early in my career, I treated ethics, inclusion, privacy — all of that — as constraints or “nice to haves” you layered in at the end. Over time, though, I’ve come to believe they must be foundational. In fact, I now see value-sensitive design (VSD) as a kind of compass that keeps me anchored in what really matters: creating technology that respects people.

What is value-sensitive design?
At its core, VSD is a design methodology that integrates human values systematically throughout the design process. It encourages you to ask: Which values are at stake? Who are the stakeholders? How might design decisions privilege or harm them?

Because values are rarely obvious or universal, VSD is inherently multi-layered. It asks us to iterate between conceptual investigations (what do users care about?), empirical investigations (how do users behave, feel, or push back?), and technical investigations (how can our systems support or thwart values?) — all in a loop.

Why It’s Urgent in 2025

We’re at a moment where interfaces, AI systems, and immersive platforms are so pervasive that the stakes of design decisions feel existential. As technology progresses, the hidden value trade-offs are becoming more visible.

Opaque AI influence — Interfaces personalize so deeply now that decisions are sometimes invisible to users. When the logic is opaque, how do users trust or contest those decisions?

Data & privacy flux — Our designs often require data to function, but more and more users are wary of what’s collected, how it’s used, and who owns it.

Diverse contexts of use — A “one size fits all” design is more dangerous than ever. What feels seamless in one culture or environment might feel invasive or alien in another.

In a sense, VSD feels like a necessary antidote to the “build fast, iterate later” culture. If we skip value thinking early, we end up retrofitting or, worse, inflicting harm.

How I Use Value-Sensitive Design in My Work

I’ve adapted VSD to fit my own process. Here are a few practices I’ve integrated (and refined, sometimes painfully):

  1. Value Mapping Before Wireframes

Before sketching anything, I explicitly map values in tension — transparency vs. simplicity, convenience vs. consent, efficiency vs. reflection. I sketch “value maps” that visualize how design decisions might push users one way or another.

This map becomes my north star during design reviews. Whenever a team member suggests a shortcut, I ask: “Which side of our value map does this lean toward?”

  2. Stakeholder Interviews + Value Probes

Beyond standard user interviews, I introduce probes (surveys, scenario exercises, conceptual cards) to surface hidden values. I ask: What makes you feel in control? What feels invasive? The answers often surprise me.

These probes help me see values users care about — sometimes more than features themselves.

  3. Value Testing

In usability tests, I don’t just ask “Can you complete this task?” I also ask: Did you feel respected? Did anything feel manipulative? Would you change permissions or opt out of any part of this flow?

I compare versions of flows not just on efficiency, but on how they score in terms of trust, clarity, and comfort.

  4. Technical Support for Values

Design decisions should be paired with technical mechanisms that enforce or protect values. If consent is a value, I might bake in revocable data access or visible toggles rather than hidden defaults. If inclusivity is a value, I make sure typography scales, alt text is thorough, and reduced-motion preferences are respected.
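
As one concrete pairing of value and mechanism, revocable consent can be an explicit, inspectable record rather than a hidden default; the names below are hypothetical:

```javascript
// Consent as explicit, revocable state: every purpose must be granted,
// can be withdrawn at any time, and defaults to "no".
const consent = new Map();

function grantConsent(purpose) {
  consent.set(purpose, { granted: true, at: Date.now() });
}
function revokeConsent(purpose) {
  consent.set(purpose, { granted: false, at: Date.now() });
}
function canUse(purpose) {
  return consent.get(purpose)?.granted === true; // unset means not granted
}

canUse('analytics');       // false: nothing is on by default
grantConsent('analytics');
canUse('analytics');       // true: explicitly granted
revokeConsent('analytics');
canUse('analytics');       // false: revocation is honored
```

The point is structural: the default is "no", and revocation is a first-class operation rather than a buried setting.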

  5. Iteration & Reflection

Values shift. Contexts evolve. What felt like a good balance six months ago might feel off today (e.g., in light of news about algorithmic bias or data breaches). I revisit value maps and audits regularly — not just when features get added.

What I’ve Learned (Good & Hard)

Over time, VSD has transformed how I see what “good design” means — but it hasn't been easy. Here are a few lessons I’ve gathered:

Trade-offs are unavoidable. You often can’t maximize all values simultaneously. The trick is to make trade-offs visible and defensible, not hidden.

Not everyone cares equally. Some stakeholders — product leads, business teams — may prioritize growth or engagement over values like privacy. Those tensions have to be surfaced and negotiated.

It can slow you down (if you let it). My early VSD efforts felt like friction. But I’ve learned to embed value thinking in smaller iterations, so it doesn’t block progress but guides it.

You need allies. VSD is easier when you have engineers, product managers, and leadership who are aligned on value principles. If design is the only voice raising these questions, you’ll feel friction.

Looking Forward

I believe that in the next few years, we’ll start to see value-aware AI, UX 3.0, and design ecosystems that aren’t just reactive to data, but proactive in upholding values. (As some researchers suggest, UX is evolving from “user-centered” to “human-AI-centered” frameworks.)

My hope is that design education, tooling, and team cultures shift so designers don’t have to be lone moral agents — value thinking becomes a shared foundation.

In the end, I design with VSD not because it looks “ethical” in marketing pamphlets, but because I want to build systems I can live with. When technology powers our lives so intimately, our values can’t live at the margins — they must live in the infrastructure.

Time Slicing in React - How Your UI Stays Butter Smooth (The Frame Budget Secret)

Know WHY — Let AI Handle the HOW 🤖

In Part 1, we learned about priority-based rendering. In Part 2, we explored Fiber architecture. But here's the final piece of the puzzle: How does React know WHEN to pause?

What if I told you React gives itself a strict 5ms budget per frame, and understanding this timing mechanism is the key to building silky-smooth user interfaces?

Your screen refreshes 60 times per second. That gives you 16.67ms per frame to do everything:

One Frame (16.67ms):
├─ JavaScript execution (React rendering)
├─ Style calculations
├─ Layout
├─ Paint
└─ Composite

If ANY of this takes > 16.67ms:
→ Frame gets dropped
→ UI feels janky
→ User notices lag

The Challenge: How do you render expensive components without dropping frames?

Modern games run at 60fps by:

  1. Doing critical work (player movement, collisions)
  2. Checking the clock: "Do I have time left?"
  3. If yes, do nice-to-have work (background animations)
  4. If no, pause and continue next frame

React does the exact same thing!

function gameLoop() {
  const frameDeadline = performance.now() + 16.67;

  // Critical: Player movement
  updatePlayerPosition();

  // Check time remaining
  if (performance.now() < frameDeadline - 5) {
    // Nice-to-have: Background details
    renderDistantTrees();
  } else {
    // Out of time! Skip to next frame
    return;
  }
}

React follows a simple rule: Work in ~5ms chunks, then check if we should yield.

// Simplified React work loop
function workLoopConcurrent() {
  // React's frame budget strategy
  const deadline = performance.now() + 5; // 5ms time slice

  while (workInProgress !== null) {
    // Do one unit of work
    workInProgress = performUnitOfWork(workInProgress);

    // Time to check if we should pause?
    if (performance.now() >= deadline) {
      // Used our 5ms, yield to browser
      break;
    }
  }

  if (workInProgress !== null) {
    // More work to do, schedule continuation
    scheduleCallback(workLoopConcurrent);
  } else {
    // Done! Commit to DOM
    commitRoot();
  }
}

Why 5ms?

  • 16.67ms per frame
  • -5ms for React work
  • = 11.67ms left for browser (layout, paint, user input)
  • Keeps UI at 60fps ✅

This is where the magic happens:

function shouldYield() {
  const currentTime = performance.now();

  // Have we used our time slice?
  if (currentTime >= deadline) {
    return true; // Pause!
  }

  // Is there urgent work waiting?
  if (hasUrgentWork()) {
    return true; // Pause and handle urgent work!
  }

  // Keep going
  return false;
}

// Used in the render loop:
while (workInProgress && !shouldYield()) {
  workInProgress = performUnitOfWork(workInProgress);
}

Let's see EXACTLY what happens with millisecond precision:

function SearchPage() {
  const [query, setQuery] = useState('');
  const deferredQuery = useDeferredValue(query);

  const results = useMemo(() => {
    console.log('Filtering for:', deferredQuery);
    // Let's say this takes 50ms total
    return expensiveFilter(bigDataset, deferredQuery);
  }, [deferredQuery]);

  return (
    <div>
      <input 
        value={query}
        onChange={e => setQuery(e.target.value)}
      />
      <ResultsList items={results} />
    </div>
  );
}

Frame-by-frame breakdown when you type "r":

Frame 1 (0-16ms):
├─ 0ms:   User types "r" (keypress event)
├─ 1ms:   query = "r" (HIGH PRIORITY state update)
├─ 2ms:   React starts render phase
│         → <input> fiber (SyncLane priority)
├─ 3ms:   Commit phase: Update DOM
├─ 4ms:   Input shows "r" on screen ✅
│         User sees immediate feedback!
├─ 5ms:   deferredQuery = "" (still old value)
├─ 6ms:   Start LOW PRIORITY render
│         → ResultsList fiber (TransitionLane)
│         → Start expensiveFilter("")
├─ 7ms:   Filter chunk 1/10 complete
├─ 8ms:   Filter chunk 2/10 complete
├─ 9ms:   Filter chunk 3/10 complete
├─ 10ms:  Filter chunk 4/10 complete
├─ 11ms:  shouldYield() = true (used 5ms slice)
└─ 12ms:  PAUSE! Save progress, yield to browser
          Browser uses remaining 4ms for:
          - Handling any input
          - Painting the input change
          - Smooth scrolling

Frame 2 (16-32ms):
├─ 16ms:  Resume LOW PRIORITY render
├─ 17ms:  Filter chunk 5/10 complete
├─ 18ms:  Filter chunk 6/10 complete
├─ 19ms:  Filter chunk 7/10 complete
├─ 20ms:  Filter chunk 8/10 complete
├─ 21ms:  shouldYield() = true
└─ 22ms:  PAUSE again

Frame 3 (32-48ms):
├─ 32ms:  Resume LOW PRIORITY render
├─ 33ms:  Filter chunk 9/10 complete
├─ 34ms:  Filter chunk 10/10 complete ✅
├─ 35ms:  Commit phase: Update DOM
└─ 36ms:  Results appear on screen!

The key: Input felt instant (4ms), while expensive work happened in background across 3 frames!

Now let's see what happens when you keep typing:

Frame 1 (0-16ms):
├─ 0ms:   Type "r"
├─ 1ms:   query = "r"
├─ 4ms:   Input shows "r" ✅
├─ 6ms:   Start filtering "" → "r" (LOW PRIORITY)
├─ 11ms:  shouldYield() = true, PAUSE
└─ 12ms:  Browser gets control back

Frame 2 (16-32ms):
├─ 16ms:  Resume filtering for "r"
├─ 20ms:  25% done with filter...
├─ 21ms:  shouldYield() checks for urgent work
│
├─ 22ms:  ⚡ User types "e" (HIGH PRIORITY!)
│         shouldYield() = true (urgent work detected!)
│
├─ 23ms:  ABANDON current render
│         Throw away partial "r" filter work
│
├─ 24ms:  query = "re" (HIGH PRIORITY)
├─ 25ms:  Input shows "re" ✅
├─ 26ms:  deferredQuery updates to "r"
│         (but immediately cancelled by "re")
│
├─ 27ms:  Start NEW filtering "r" → "re"
└─ 28ms:  shouldYield() = true, PAUSE

// Old "r" filter NEVER completes or shows!
// React intelligently skipped that intermediate state

React uses the browser's Scheduler API where available, with a MessageChannel fallback for time slicing:

// Modern browsers (Chrome, Edge)
scheduler.postTask(() => {
  workLoopConcurrent();
}, { priority: 'background' });

// Fallback: MessageChannel for time slicing
const channel = new MessageChannel();
channel.port1.onmessage = () => {
  workLoopConcurrent();
};

function scheduleCallback(callback) {
  channel.port2.postMessage(null);
}

Why MessageChannel?

  • setTimeout(fn, 0) has 4ms minimum delay (too slow!)
  • requestAnimationFrame only runs before paint (wrong timing)
  • MessageChannel runs immediately after current task (perfect!)

Here's how time slicing pairs with useTransition in a dashboard:

function Dashboard() {
  const [metric, setMetric] = useState('revenue');
  const [isPending, startTransition] = useTransition();

  const switchMetric = (newMetric) => {
    startTransition(() => {
      setMetric(newMetric);
    });
  };

  return (
    <div>
      <Tabs selected={metric} onChange={switchMetric} />
      {isPending && <LoadingBar />}
      <ExpensiveChart metric={metric} />
    </div>
  );
}

function ExpensiveChart({ metric }) {
  const chartData = useMemo(() => {
    // This takes 80ms to compute
    const data = [];
    for (let i = 0; i < 10000; i++) {
      data.push({
        x: i,
        y: complexCalculation(metric, i)
      });
    }
    return data;
  }, [metric]);

  return <ChartLibrary data={chartData} />;
}

Frame timeline when switching from "Revenue" to "Profit":

Frame 1 (0-16ms):
├─ 0ms:   User clicks "Profit" tab
├─ 1ms:   metric = "profit" (TRANSITION priority)
├─ 2ms:   Tab switches to "Profit" ✅
├─ 3ms:   isPending = true
├─ 4ms:   LoadingBar appears ✅
│         User sees immediate feedback!
├─ 5ms:   Start chart re-render (LOW PRIORITY)
│         Calculate data point 0
├─ 6ms:   Calculate data point 1
├─ 7ms:   Calculate data point 2
│         ... (calculating in loop)
├─ 10ms:  Calculate data point 500
├─ 11ms:  shouldYield() = true
└─ 12ms:  PAUSE (used 5ms slice)
          Progress saved: at data point 500

Frame 2 (16-32ms):
├─ 16ms:  Resume chart calculation
├─ 17ms:  Calculate data point 501
├─ 18ms:  Calculate data point 502
│         ... (calculating in loop)
├─ 21ms:  Calculate data point 1000
├─ 22ms:  shouldYield() = true
└─ 23ms:  PAUSE again
          Progress saved: at data point 1000

// This continues across ~16 frames (80ms / 5ms per frame)

Frame 16 (240-256ms):
├─ 240ms: Resume chart calculation
├─ 241ms: Calculate data point 9998
├─ 242ms: Calculate data point 9999
├─ 243ms: All calculations complete! ✅
├─ 244ms: Commit phase: Update DOM
├─ 245ms: New chart renders
├─ 246ms: isPending = false
└─ 247ms: LoadingBar disappears

Total time: 247ms
But UI stayed responsive the entire time! 🎉

Think of React's time slicing like a restaurant kitchen:

Without Time Slicing (Old React):

  • Chef starts making a complex dish
  • New urgent order comes in (appetizer)
  • Chef: "Sorry, I have to finish this entrée first"
  • Customer waits 20 minutes for a simple appetizer 😡

With Time Slicing (Concurrent React):

  • Chef starts making complex entrée (5 min work)
  • After 30 seconds, checks: "Any urgent orders?"
  • Urgent appetizer comes in!
  • Chef: "Let me pause the entrée"
  • Makes appetizer immediately (2 min)
  • Returns to entrée
  • Both customers happy! 😊

Time slicing makes Suspense for data fetching smooth:

function ProfilePage() {
  const [userId, setUserId] = useState(1);
  const [isPending, startTransition] = useTransition();

  const switchUser = (newId) => {
    startTransition(() => {
      setUserId(newId);
    });
  };

  return (
    <div>
      <button onClick={() => switchUser(userId + 1)}>Switch User</button>
      {isPending && <InlineSpinner />}
      <Suspense fallback={<Skeleton />}>
        <ProfileDetails userId={userId} />
      </Suspense>
    </div>
  );
}

function ProfileDetails({ userId }) {
  const user = use(fetchUser(userId)); // Suspends

  // Heavy computation after data loads
  const stats = useMemo(() => {
    return calculateComplexStats(user);
  }, [user]);

  return <ProfileView user={user} stats={stats} />;
}

Timeline when switching users:

Frame 1 (0-16ms):
├─ 0ms:   Click "Switch User"
├─ 1ms:   userId = 2 (TRANSITION priority)
├─ 2ms:   Start render ProfileDetails
├─ 3ms:   Suspend! (waiting for data)
├─ 4ms:   Old profile STAYS VISIBLE (smooth!)
└─ 5ms:   Inline spinner shows

... Network request in flight ...

Frame 50 (800-816ms):
├─ 800ms: Data arrives! fetchUser(2) resolves
├─ 801ms: Resume ProfileDetails render
├─ 802ms: Start calculateComplexStats (expensive!)
├─ 807ms: shouldYield() = true
└─ 808ms: PAUSE calculation

Frame 51 (816-832ms):
├─ 816ms: Resume calculateComplexStats
├─ 821ms: Calculation complete!
├─ 822ms: Commit phase
└─ 823ms: New profile smoothly appears ✅

No jarring skeleton screen!
Old content stayed visible during load!

You can actually see time slicing in action:

function ExpensiveComponent({ data }) {
  // Log when rendering starts/pauses
  console.log('Render start:', performance.now());

  const result = useMemo(() => {
    const start = performance.now();
    const computed = expensiveComputation(data);
    const end = performance.now();
    console.log(`Computation took: ${end - start}ms`);
    return computed;
  }, [data]);

  console.log('Render end:', performance.now());
  return <div>{result}</div>;
}

// Console output:
// Render start: 0.5ms
// Render end: 1.2ms (fiber created)
// (pause - browser handles other work)
// Computation took: 50ms (spread across 10 frames!)
// (pause - browser handles other work)
// Render start: 52ms (commit phase)
// Render end: 53ms

Stop Thinking:

  • "React renders everything at once"
  • "Long computations always block the UI"
  • "I need to manually split work with setTimeout"

Start Thinking:

  • "React renders in 5ms chunks"
  • "Long work is automatically split across frames"
  • "Browser gets control back between chunks"
  • "UI stays responsive even during heavy work"
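
For contrast, the manual setTimeout splitting you no longer need looks something like this sketch:

```javascript
// The old manual approach: process work in chunks, yielding to the
// browser between chunks with setTimeout. Concurrent React does this
// for you at fiber granularity instead.
function processInChunks(items, processItem, chunkSize = 100) {
  let i = 0;
  (function doChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) processItem(items[i]); // one chunk of work
    if (i < items.length) setTimeout(doChunk, 0); // yield, resume next tick
  })();
}

const seen = [];
processInChunks([1, 2, 3], (x) => seen.push(x)); // fits in one chunk: runs synchronously
```

Hand-rolled chunking like this also yields coarsely (setTimeout is clamped to ~4ms minimum in browsers) and can't abandon stale work the way React's lanes can.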

Many developers learn the HOW: "Use useDeferredValue and it makes things faster."

When you understand the WHY: "React works in 5ms time slices, yielding control back to the browser after each slice to maintain 60fps, and can pause/resume work at any fiber node," you gain insights that help you:

  • Understand why some operations feel instant
  • Know when concurrent features actually help
  • Debug performance with precise timing knowledge
  • Build UIs that feel professional and responsive

Quantum Brilliance Makes Devices That Keep Their Cool

Quantum Brilliance is making strides in quantum computing by developing diamond-based qubits that operate at room temperature.

The post Quantum Brilliance Makes Devices That Keep Their Cool appeared first on EE Times.

Can TFLN Make Photonic Compute Competitive?

A new photonic material, with better optical properties than silicon, could be the key to commercializing photonic compute.

The post Can TFLN Make Photonic Compute Competitive? appeared first on EE Times.

NXP, Edge Impulse Chart Different Paths to the Same Edge AI Future

At The Things Conference 2025, startups and semiconductor giants alike showed how different players are converging on the same challenge: making AI practical and efficient at the edge.

The post NXP, Edge Impulse Chart Different Paths to the Same Edge AI Future appeared first on EE Times.


Disney sends cease and desist letter to Character.AI

Disney has demanded that Character.AI stop using its copyrighted characters. Axios reports that the entertainment juggernaut sent a cease and desist letter to Character.AI, claiming that it has chatbots based on its franchises, including Pixar films, Star Wars and the Marvel Cinematic Universe. In addition to claiming copyright infringement, the letter questioned whether these protected characters were being used in problematic ways in conversations with underage users.

"Character.ai's infringing chatbots are known, in some cases, to be sexually exploitive and otherwise harmful and dangerous to children, offending Disney's consumers and extraordinarily damaging Disney's reputation and goodwill," the letter said.

Character.AI has been subject to legal and government scrutiny multiple times already over concerns that it has not provided sufficient safety guards for minors. The platform has been implicated in failing to protect two different teenagers who discussed suicide with its chatbots and then took their own lives. It has also drawn the attention of the Federal Trade Commission and US Attorneys General.

For now, at least, the platform appears to be responsive to Disney's demands. "It's always up to rightsholders to decide how people may interact with their IP, and we respond swiftly to requests to remove content that rightsholders report to us," a representative said, per the Axios report. "These characters have been removed."

Disney has shown that it is willing to take legal action against AI companies. It sued Midjourney along with Universal Studios in June on allegations of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/ai/disney-sends-cease-and-desist-letter-to-characterai-220204094.html?src=rss

The best E Ink tablets for 2025

I’m a longtime lover of pen and paper, so E Ink tablets have been intriguing to me ever since they started becoming more widely available. After accumulating hundreds of half-filled notebooks over the years, I eventually turned to digital tools instead: it was just easier to store everything on my phone or laptop so my most important information was always at my fingertips.

E Ink tablets seem to provide the best of both worlds: the tactile satisfaction of regular notebooks with many of the conveniences found in digital tools, plus easy-on-the-eyes E Ink screens. These devices have come a long way in recent years — now you can find them in multiple sizes, some have color E Ink screens and others double as full-blown e-readers with access to ebook stores and your local library’s offerings. I’ve tested out close to a dozen E Ink tablets over the past few years to see how well they work, how convenient they really are and which are the best tablets using E Ink screens available today.

Editor's note (September 2025): Amazon announced a revamped family of Kindle Scribe E Ink tablets. The Kindle Scribe 3 is thinner and lighter than its predecessor with faster page-turning and writing experiences. The Kindle Scribe Colorsoft is the first full-color addition to the lineup, with a pen that will support writing in 10 colors and highlighting in five different shades. Both new Scribe tablets will be available in the US "later this year." You can read our Kindle Scribe Colorsoft hands on to get a first look, but we'll update this guide once we've had the chance to test out both new E Ink tablets.

An E Ink tablet will be a worthwhile purchase for a very select group of people. If you prefer the look and feel of an e-paper display to the LCD panels found on traditional tablets, it makes a lot of sense. They’re also good options for those who want a more paper-like writing experience (although you can get that kind of functionality on a regular tablet with the right screen protector) or a more distraction-free device overall.

That last point is key. Many E Ink tablets don’t run the same operating systems as regular tablets, so you’re automatically limited in what you can do. And even on those that do let you download traditional apps like Chrome, Instagram and Facebook, E Ink tablets are not designed to give you the best casual-browsing experience. This is mostly due to the nature of E Ink displays, which have noticeable refreshes, a lack of vibrant colors and lower picture quality than the panels you’ll find on even the cheapest iPad.

Arguably the biggest reason not to go with an iPad (all models of which support stylus input, a plethora of reading apps and more) is that it’s much easier to get distracted by email, social media and other internet-related temptations.

Arguably the most important thing to consider when looking for an E Ink tablet is the writing experience. How good it is depends a lot on the display’s refresh rate (does it refresh every time you put pen to “paper,” or at some other regular interval?) and the stylus’ latency. Most of the tablets I’ve tested have little to no latency, but some are certainly better than others. Finally, double-check before buying whether your preferred E Ink tablet comes with a stylus or you need to purchase one separately.

How much will you be reading books, documents and other things on this tablet? E Ink tablets come in many sizes, but most of them tend to be larger than your standard e-reader because it makes writing much easier. Having a larger display isn’t a bad thing, but it might make holding it for long periods slightly more uncomfortable. (Most e-readers are roughly the size of a paperback book, giving you a similar feeling to analog reading).

The supported file types for e-books can also make a big difference. It’s hard to make a blanket statement here because this varies so much among E Ink tablets. The TL;DR is that you’ll have a much better reading experience if you go with one made by a company that already has a history in e-book sales (i.e. Amazon or Kobo). All of the titles you bought via the Kindle or Kobo store should automatically be available to you on your Kindle or Kobo E Ink tablet.

Kindle titles, specifically, are protected by DRM, so it’s not necessarily the best idea to try to bring them over to a third-party device. Unless the tablet runs an operating system like Android that supports downloads for apps like Kindle and Kobo, you’ll be limited to supported file types, like ePUB, PDF, MOBI, JPEG, PNG and others.

Most E Ink tablets have some on-device search features, but they can vary widely between models. You’ll want to consider how important it is to you to be able to search through all your handwritten notes and markups. I noticed in my testing that Amazon’s and Kobo’s E Ink tablets made it easy to refer back to notes made in books and files because they automatically save to the specific pages on which you took notes, made highlights and more.

Searching is less standardized on E Ink tablets that have different supported file types, but their features can be quite powerful in their own right. For example, a few devices I tested supported text search in handwritten notes along with handwriting recognition, the latter of which allows you to translate your scribbles into typed text.

While we established that E Ink tablets can be great distraction-free devices, most manufacturers understand that your notes and doodles aren’t created in a vacuum. You may want to access them elsewhere, and that requires some form of connectivity. All of the E Ink tablets I tried have Wi-Fi support, and some support cloud syncing, companion mobile apps and the ability to export notes via email so you can access them elsewhere.

None of them, however, integrate directly with a digital note taking system like Evernote or OneNote, so these devices will always be somewhat supplementary if you use apps like that, too. I’d argue that, if you already lean heavily on apps like OneNote, a standard tablet with a stylus and screen protector might be the best way to go. Ultimately, you should think about what you will want to do with the documents you’ll interact with on your E Ink tablet after the tablet portion is done.

E Ink tablets aren’t known for being cheap. They generally fall into the $300-$800 price range, which is what you can expect to pay for a solid regular tablet, too. A key factor in price is size: cheaper devices with E Ink displays are likely to have smaller screens, and stylus support isn’t as much of a given. Also, those types of devices are generally considered e-readers because of their size and may not be the best for note-taking, doodling and the like.

E Ink tablets have gone up in price recently. Supernote and Onyx Boox increased prices, as did reMarkable. The former said it was due to “increased costs,” and a reMarkable representative confirmed this to Engadget and provided the following statement: “We regularly review our pricing based on market conditions and operational costs. We've communicated an upcoming adjustment for the US market effective in May to provide transparency to our customers. Multiple factors influence our pricing decisions, including supply chain dynamics and overall operational costs in specific markets.”

As a result, the reMarkable Paper Pro jumped from $579 to $629 (that's for the bundle with the standard Marker and no Folio). This isn't great, considering the Paper Pro was already on the expensive side of the spectrum for E Ink tablets.

The Boox Tab X C is a color-screened version of the Tab X, the company’s all-purpose e-paper Android tablet. The Tab X C has a lovely 13.3-inch Kaleido 3 E Ink color display, an octa-core processor and 6GB of RAM, and it runs Android 13, making it one of the most powerful tablets in Boox’s lineup. I’ve used the Tab X in the past and this color version runs similarly, if not better, and at 5.3mm thick, it’s impressively svelte even when you pair it with its folio keyboard case. As someone who loves legal-pad sized things to write on, I also like how the Tab X C is most akin to A4-size paper. But at $820 for the bundle with the standard case (or a whopping $970 for the tablet and its keyboard case), it’s really only best for those who are ready to go all-in on a premium E Ink tablet.

Lenovo made a solid E Ink tablet in the Smart Paper, but it's too pricey and too married to the company's companion cloud service to warrant a spot on our top picks list. The hardware is great, but the software isn't as flexible as that of competitors like the reMarkable 2. It has good Google Drive integration, but you must pair it with Lenovo's cloud service to really get the most use out of it — and in the UK, the service costs £9 per month for three months, which is quite expensive.

The Boox Tab Ultra has a lot of the same features we like in the Note Air 2 Plus, but it’s designed to be a true, all-purpose tablet with an E Ink screen. It runs Android 11 and is compatible with a magnetic keyboard case, so you can use it like a standard 2-in-1 laptop, albeit a low-powered one. You can browse the web, check email and even watch YouTube videos on this thing — but that doesn’t mean you should. A standard 2-in-1 laptop with a more responsive screen and better overall performance would be a better fit for anyone with even the slightest desire for an all-in-one device. Like the rest of Onyx’s devices, the Tab Ultra is specifically for those who put reading and eye comfort above all else.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/best-e-ink-tablet-130037939.html?src=rss

It looks like an M5 iPad Pro is coming very soon

Apple may be releasing a new iPad Pro with an M5 chip in the very near future, according to an unboxing video made by a Russian YouTuber. This is the same creator who leaked the 14-inch MacBook Pro with the M4 chip last year, so the information in the video is likely credible.

To that end, the creator unboxes what appears to be a new 13-inch iPad Pro with an M5 chip and 256GB of storage in a Space Black finish. The exterior design doesn't look noticeably different from current models, as the tablet still has a single rear camera, four speakers and a Smart Connector. 

Previous leaks had indicated that the next iPad Pro would feature a second front camera, but this video doesn't confirm that. It also looks like this new model is still plenty thin.

The video even puts the tablet through some testing. A Geekbench 6 run shows a 12 percent increase in multi-core CPU performance compared to the previous generation, and the results also point to a roughly 36 percent faster GPU. The benchmark further indicates that the 256GB model of this tablet will include 12GB of RAM; current models with 256GB of storage ship with just 8GB.

The footage shows that this tablet is running iPadOS 26, which makes sense, and that the battery was manufactured in August of this year. This could all be a ruse but, again, the leaker has been proven correct in the past. It's likely that Apple will announce the refreshed iPad Pro with the M5 chip sometime in October, which tracks with previous reporting.

It was also recently reported that the company is working on a refresh of the MacBook Pro laptop with the M5 chip. These computers could be available later this year.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/it-looks-like-an-m5-ipad-pro-is-coming-very-soon-184406117.html?src=rss

Everything announced at Amazon's fall hardware event

It's not technically Techtober yet since we’re one day shy, but we've already had a bunch of fall hardware events from some of the bigger companies in the tech space. Today, it was Amazon's turn to step up to the plate.

Going into its event, the company teased new Echo speakers and Kindle news. Rumors suggested Amazon was ready to ditch its long-standing Android-based OS on Fire TVs in favor of the Linux-based Vega OS it's already using on the Echo Show 5, Echo Hub units and Echo Spot.

Indeed, Echo, Kindle and Fire TV are all being featured at the event, along with Ring and Blink devices. Oh, and lots of Alexa+ updates, of course.

Amazon doesn’t usually livestream its product events and that remained the case here. However, we’ve got you covered with all the news and announcements with both our liveblog and this here rundown of everything Amazon announced at its fall hardware event:

Echo speakers

The Echo lineup was beyond overdue for a refresh — it's been five years since the 4th-gen Echo arrived, while the most recent Echo Studio debuted a couple of years later. And, with Amazon looking to push Alexa+, it's certainly time for some new models.

To that end, the $100 Echo Dot Max and $220 Echo Studio are up for pre-order and will ship on October 29. No sign of a new standard Echo this time!

The Echo Dot Max delivers almost three times the bass of the fifth-gen Echo Dot and sound that adapts to your space, Amazon claims. The company added that the updated design integrates the speaker directly into the device’s housing, freeing up extra space for more bass. In fact, the Echo Dot Max has two speakers: a “high-excursion woofer optimized for deep bass and a custom tweeter for crisp high notes.”

Amazon has shrunk down the Echo Studio to 60 percent of the size of the last version. Even so, it has a “powerful high-excursion woofer that delivers deep, immersive bass and three optimally placed full-range drivers to create immersive, room-filling sound,” according to the company. The latest model supports spatial audio and Dolby Atmos.

If you’re in the US and you snap up either of the new Echo speakers — or the latest Echo Show devices — Amazon says you’ll get early access to Alexa+. We’ve had a chance to try the speakers, so be sure to check out Engadget senior reporter Jeff Dunn’s first impressions.

Amazon is looking to take on the likes of Sonos with a home theater feature. You’ll be able to connect as many as five Echo Studio or Echo Dot Max devices to a compatible Fire TV stick for surround sound.

The company says that, with the Alexa Home Theater feature, Alexa will take care of everything after you plug in your speakers. That includes tuning them for your space automatically. Amazon will sell the Echo speakers in Alexa Home Theater bundles too.

2025 Echo Show smart displays

Quelle surprise, Amazon has refreshed the Echo Show smart displays. As with the rest of its new products, Amazon built the Echo Show 8 and Echo Show 11 with Alexa+ in mind.

They boast new front-facing stereo speakers and upgraded microphones, all the better to bolster the chats you might have with Alexa+. The new units have improved cameras with 13MP lenses. Alexa will be able to recognize you when you approach the device and display personalized information. It might show you, for instance, an AI-powered summary of footage from your Ring devices. The Echo Show smart home hub supports devices in the Zigbee, Matter and Thread ecosystems too.

As for the display, both of the new Echo Show units have a negative liquid crystal screen designed to maximize viewing angles. Amazon also said there are new color-coded calendars to help everyone in the family to stay on top of their schedules. Alexa will keep an eye out for scheduling conflicts. Such a clever cookie.

The new Echo Show 8 costs $180, while the Echo Show 11 will run you $220. Pre-orders for the latest Echo Show models open today. They’ll ship on November 12.

Be sure to check out Engadget senior writer Sam Rutherford’s initial impressions of the latest models.

Kindle Scribe Colorsoft

The Kindle Scribe 2 and Kindle Colorsoft appear to have been smushed together, as there's now a full color version of Amazon's writing tablet (which has some other upgrades). The company is using its custom display tech for the Kindle Scribe Colorsoft, which has a color filter and “light guide” with nitride LEDs. The idea, according to Amazon, is to boost color without washing out details.

The company says it developed a new rendering engine for the Kindle Scribe Colorsoft too. It claims this helps make sure writing on the device feels fluid, natural and fast. Moreover, the Kindle Scribe Colorsoft is said to run for several weeks on a single charge.

You'll be able to choose from 10 pen colors for writing, drawing and annotation. There are five highlighter colors as well.

The Kindle Scribe Colorsoft will be available in the US later this year, starting at $630. It's coming to the UK and Germany in early 2026. 

We’ve been able to try out the Kindle Scribe Colorsoft. You can check out Engadget managing editor Cherlynn Low’s initial hands-on impressions.

Kindle Scribe

Amazon is refreshing the regular Kindle Scribe too. It has a larger, 11-inch display to match the proportions of a sheet of paper. It's 5.44mm (0.2 inches) thin and weighs 400g. Amazon also says it's 40 percent faster than the previous model when it comes to page turns and writing. 

The standard 2025 Kindle Scribe shares a bunch of features with the Colorsoft model. Both boast a front light system with miniature LEDs, a texture-molded glass that's designed to improve friction for writing and revamped display tech that's said to make it feel like you're writing directly on the page.

The latest devices have a quad-core chip and more memory than previous models. That helps to power new AI-driven features. You'll be able to get an AI-generated summary of information that you search for across your notes and the option to ask follow-up questions. Starting in early 2026, you’ll be able to send notes and other docs from your Kindle Scribe to Alexa+, and have a conversation with the chatbot about them.

There's support for Google Drive and Microsoft OneDrive, so you can pull in documents from there to mark them up. There's an option to export annotated PDFs, as well as to export notes as converted text or an embedded image to OneNote. 

The home screen has a new Quick Notes function to help users start jotting down their thoughts faster. You’ll have swift access to recently opened or added books and documents from there too. 

Meanwhile, there's a new pen that attaches to your Kindle Scribe. This refreshed Kindle Scribe will go on sale in the US by the end of the year, and it starts at $500. A version without a front light will be $430. As with the Colorsoft model, these will be available in the UK and Germany in early 2026.

Amazon Fire TV

The image Amazon sent out as part of its event invite included the corner of a television, hinting that Fire TV would get some time in the spotlight during today's event. And, yup, that turned out to be the case.

There's a new 4K streaming stick called the Fire TV Stick 4K Select ($40). Amazon says it supports HDR10+ and your favorite streaming services. Support for Alexa+, Luna and Xbox Cloud Gaming is on the way too. As with the other Fire TV devices Amazon announced today, pre-orders are open and the Fire TV Stick 4K Select will ship next month.

If you’d rather have the Fire TV ecosystem baked into your television, Amazon’s got you covered there. The latest Omni QLED Series models have displays that are 60 percent brighter than previous versions, Amazon says. The TVs adjust their display colors automatically depending on the ambient lighting and can turn on by themselves when they detect your presence (a capability Amazon calls Omnisense). The TV can also display your photos or artwork and switch off when you exit the room. The Omni QLED Series TVs come in 50-inch, 55-inch, 65-inch and 75-inch variants and start at $480.

The Omnisense feature is available on the latest 2-Series Fire TV models too. These budget-friendly 4K options are said to be 30 percent faster than their predecessors. A Dialogue Boost feature will be present on all the latest Fire TV models. You can snap up a 2-Series Fire TV in either a 32-inch or 40-inch variant, starting at just $160.

Janko Roettgers of LowPass reported last week that Amazon was set to bring its Vega operating system to Fire TV by the end of this year. Whaddya know? The company confirmed that it's bringing Vega to Fire TVs and streaming devices, including the 4K Select. So, it’ll debut in October on at least one device. Amazon didn’t say when it would roll out the OS more broadly, but helpfully noted that Vega is “responsive and highly efficient.”

2025 Blink camera lineup

No, you didn’t miss it: there are new Blink devices as well. All of them can capture 2K video, and pre-orders for all three go live today.

Amazon says the $90 Blink Outdoor 2K+ has a 4x zoom, two-way talk with noise cancellation, enhanced low-light performance and, for Blink Plus subscribers, smart notifications when people and vehicles are detected.

The $50 Blink Mini 2K+ is primarily designed for indoor use, but you can place it outside thanks to a weather-resistant power adapter. Otherwise, it has the same features as the Blink Outdoor 2K+.

Blink had an entirely new device to show off as well. The Blink Arc looks quite odd, almost like a pair of goggles. It houses two Blink Mini 2K+ cameras and combines the footage into “a single, seamlessly stitched feed.” If you have a Blink Plus subscription, you’ll have access to a 180-degree view. The Blink Arc can also be used outside with the weather-resistant power adapter. It costs $100, and the mount is an extra $20.

Ring Doorbell

Retinal Vision is a concept that Ring has built its latest devices around (for what it’s worth, the name reminds me I’m probably due for an eye exam). The idea is to use AI to optimize image quality. The new cameras tap into back-side illumination sensors to deliver superior low-light performance, Amazon says.

A function called Retinal Tuning samples your Ring camera's video quality several times per day for up to two weeks in an attempt to improve it. Large-aperture lenses in the new devices will help with all of that.

To that end, Amazon has announced a Wired Doorbell Plus with 2K visuals for $180 and the Indoor Plus Cam 2K for $60. There are 4K models too: Outdoor Cam Pro 4K ($200), Spotlight Cam Pro 4K ($250), Wired Doorbell Pro 4K ($250) and Floodlight Cam Pro 4K ($280). Pre-orders for all of them open today.

Of course, there are Alexa+ features for the new cameras. Alexa+ Greetings is a function that will enable the AI to make "informed decisions about how to greet certain visitors." Amazon will roll this out for the new cameras in December.

Familiar Faces, meanwhile, is a facial recognition feature. It identifies known faces, so Ring will be able to notify you when they’re at your door (or if someone unfamiliar is there). That’s coming in December too.

There’s another new feature called Search Party, which Amazon says is about helping people find lost dogs. When a neighbor reports a missing pooch in the Ring app, a Search Party commences on nearby Ring cameras. These will keep a lookout and notify camera owners if they spot what may be the missing dog. The camera owner will then see a photo of the pet alongside relevant camera footage, and can then choose whether to alert the dog’s owner. Search Party will roll out in November.

Zero prizes for anyone who guessed that Amazon was going to talk up Alexa+ features. That one was a gimme. All of the devices Amazon just announced will support Alexa+ out of the box.

AI features for books are coming to the Kindle Scribe devices and other compatible Kindles in early 2026. The Kindle iOS app will be the first to gain access later this year. Amazon says the Story So Far option will catch you up on everything you've read in a book to that point without any spoilers — which could be handy if you're returning to a digital tome after a break. With the Ask this Book option, you'll be able to highlight any text, ask questions about it and get "spoiler-free answers." Amazon says thousands of Kindle books will support these features.

On Fire TV devices, Alexa+ will be able to find scenes in movies using natural language prompts. You'll be able to ask the assistant to find a scene where a certain thing happens and it will try to find that for you. This feature is coming soon.

You’ll be able to ask the voice assistant to find a show like one you watched a couple of nights earlier, a family-friendly movie or something that features your favorite performer. This isn’t limited to Prime Video as Alexa+ can tap into a variety of streaming services, including Netflix and HBO Max.

You can ask the assistant questions about what you’re watching too, such as details about an actor (handy if you recognize them from another show or movie but you’re not sure what) and behind-the-scenes info. This works for live sports as well, so you can find out stats and other nuggets about what you’re watching on Prime Video, Sling TV, DirecTV and Fubo.

On the new Echo Show devices, there’s an Alexa+ shopping widget. From here, you’ll be able to keep tabs on your Amazon, Whole Foods and Amazon Fresh deliveries; access detailed info on products; and re-order items with a voice command or a tap.

Amazon claims Alexa+ can help you figure out what gift to get someone based on responses to questions it asks you. The assistant will offer personalized recommendations from Amazon.

Alexa+ is going to hook into all manner of devices and services. Through the Alexa+ Store (which will be available soon), you’ll be able to access services from the likes of TaskRabbit, Fandango, Priceline, Uber, Lyft, Thumbtack, GrubHub and Yahoo Sports. You can manage your various Amazon subscriptions via Alexa+ too.

In addition, Alexa+ is coming to speakers, TVs and in-car systems from other brands. Those include Bose, Sonos, LG, Samsung and BMW.

Alexa+ is currently free with Prime. Non-Prime members can use it for $20 per month — but you may as well pay $15 per month or $139 per year for Prime if you really, really want access to Alexa+.

Smart Dimmer Switch and Remote

Amazon had another product to unveil today, but this one wasn’t highlighted during the event. The company has revealed a $20 smart remote for Echo devices. Pre-orders are open and it’ll ship on October 30.

You can use the Alexa app or Alexa+ to customize the Smart Dimmer Switch and Remote. There are four buttons to which you can map individual actions (like making a change to your smart lights) and multi-stage routines. Amazon might also suggest routines for you to set up based on your habits. As well as using this device as a traditional remote, you can attach it to a wall, which might be the way to go if you’re going to use it primarily for managing your lights.

This article originally appeared on Engadget at https://www.engadget.com/home/everything-announced-at-amazons-fall-hardware-event-143557140.html?src=rss

Amazon Echo Studio and Echo Dot Max hands-on: More bass, round shapes

Among the horde of new devices Amazon unveiled during its New York City event on Tuesday are two new Echo speakers: a higher-end Echo Dot called the Echo Dot Max and a next-generation Echo Studio with a new ball-shaped design. Both are available to pre-order starting today, with shipping to start on October 29. The Dot Max costs $100 — well above the standard Dot (which remains available) — while the Studio is priced at $220.

The Echo Dot Max looks to be Amazon’s answer to Apple’s HomePod mini, which is similarly compact yet touts quality sound for its size. The company says the new speaker offers “nearly three times” as much bass response as the cheaper Echo Dot. That’s largely because it’s been redesigned on the inside to include two speakers — a woofer and a custom tweeter — instead of one and to increase the amount of internal air space.

The new Echo Studio, meanwhile, gets a fairly major design overhaul. It essentially looks like a bigger version of Echo Dot Max, with the old cylindrical design replaced by a spherical shape. Amazon says it’s 40 percent smaller than the last one, with the goal being to make it easier to stick the device in varying locations around the house. This one is built with three full-range drivers alongside a woofer, and it supports both Dolby Atmos and spatial audio with services that offer that (such as Apple Music).

Three Amazon Echo Dot Max speakers, one purple, one white and one black, are displayed on a white wooden table.
The Amazon Echo Dot Max.
Sam Rutherford for Engadget

The Studio’s smaller footprint could be handy if you want to take advantage of the new Alexa Home Theater mode. This lets you turn up to five new Echo Studios or Echo Dot Maxes into a surround sound setup for your TV, sort of like an Alexa-fied version of the room calibration tech Sonos offers with its home speakers. If you have compatible gear, Amazon says the voice assistant will automatically locate the different speakers in your room and map out an appropriate acoustic profile. An Amazon representative told us that you need a Fire TV Stick 4K or 4K Max streamer for this to work for now, however, since Alexa uses your TV’s location to determine where the front of the surround system is, then uses that in tandem with your speakers’ locations to estimate where you’re sitting. This whole process takes “less than five minutes,” according to the company, and it’ll auto-adjust if you add in more speakers. You won’t be able to mix and match Studios and Dot Maxes in one setup at launch, however — it has to be all of one speaker or the other.

I was briefly able to check out and listen to the new speakers at the event through a controlled demo. I wouldn’t say either looks particularly “premium” at first blush, but the spherical designs are clean and simple, and the knitted fabric surrounding the hardware feels firm and sturdy. Neither strays too far from the traditional Echo aesthetic; you could pop them on a counter or TV stand and they won’t draw much attention to themselves. Of note, the volume/mic control buttons and Alexa light ring are now angled on the front of each device, which may make adjusting things a little quicker.

As for how the two speakers actually sound, I have to reiterate that my demo was highly controlled, i.e. orchestrated to make the new speakers sound as good as possible. I wasn’t able to pick a song, adjust volume or actually talk to Alexa myself. (Though Amazon says there are new chips and mic arrays to improve conversation detection.)

The new Amazon Echo Studio sits on a brown shelf.
The Amazon Echo Studio.
Sam Rutherford for Engadget

With that said, the Echo Dot Max did indeed produce more bass thump and clearer separation than the cheaper Dot in a side-by-side comparison using Fleetwood Mac’s “Dreams.” It had better, given the price difference, but the Dot Max also sounded far less “closed-off” overall. The Echo Studio was a marked step up from there, producing a much wider soundstage, more impactful bass and more natural highs. Again, take all of this with a grain of salt, but I wouldn’t be surprised if it proves worth the premium for audio-focused buyers once we test it ourselves. An Amazon rep said this new Studio model isn’t noticeably louder than the last one, though, which isn’t surprising given how much more compact it is. Instead, the focus is on the smaller frame and a “richer” sound. To that end, both devices seemed to go for a slightly more bass-heavy profile than a neutral one, based on my limited listen.

I was also able to listen to four Echo Studios paired in tandem. Predictably, this setup filled the room with sound and delivered more precise imaging, whether we were listening to an ambient soundscape of birds chirping in the woods or an action-heavy scene from Ready Player One. I do question the value, though: You’re getting close to the $1,000 range with four of these things, and at that point, many people may be better off just getting a decent soundbar and a dedicated subwoofer for fuller bass.

Naturally, Amazon says both speakers — along with the new Echo Show 8 and Echo Show 11 — are designed with Alexa+ in mind, and anyone who buys either device will be able to use the upgraded assistant in early access. Both are still likely to be more niche than the less expensive Echoes, given that many people still use these things for simpler smart home tasks and basic listening. But for those who’ve grown accustomed to having an Echo around the house and are willing to pay for improved sound quality, there may be enough to like here. We’ll know more clearly when we’re able to test them on our own.

This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/amazon-echo-studio-and-echo-dot-max-hands-on-more-bass-round-shapes-182339624.html?src=rss

Imgur has left the UK

Imgur has shut off its image-hosting platform for users in the UK, displaying a “content not available in your region” notice across the site and on third-party embeds. The move comes after the UK Information Commissioner’s Office (ICO) warned that it intended to levy fines against Imgur’s owner, MediaLab AI, after an investigation into the service's handling of children’s data, age verification and privacy protection. Exact details of the fine, or what the findings of the investigation were, have not been shared.

"We are aware of reports that the social media platform Imgur is currently not available in the UK. Imgur's decision to restrict access in the UK is a commercial decision taken by the company," said regulators in a statement. They also stressed that "exiting the UK" does not mean a company can avoid any levied penalties, and that the investigation is ongoing.

"Our findings are provisional and the ICO will carefully consider any representations from MediaLab before taking a final decision whether to issue a monetary penalty," said regulators.

In recent years, the ICO has stepped up enforcement of its policies governing data privacy for minors. In 2023, the watchdog fined TikTok $15.8 million for what it said were several violations of data protection laws. The regulator alleged that in 2020 TikTok allowed as many as 1.4 million children under the age of 13 to use the app, against its own policies. TikTok found itself under investigation yet again this year over similar alleged violations. The ICO also previously raised concerns surrounding a Snapchat generative AI chatbot named My AI, alleging that it placed children's privacy at risk.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/imgur-has-left-the-uk-181715724.html?src=rss

Just Cause developer Avalanche Studios is the latest game company to announce layoffs

Avalanche Studios is following in the footsteps of so many game developers this year. Today, the company posted a notice to its website announcing plans for a restructuring. Avalanche said it will close its studio in Liverpool, impacting all staff members in the city. The company said it will also "reduce our workforce and restructure the teams" at its studios in Malmo and Stockholm, but did not provide specifics about the scope of those layoffs.

Although the statement simply credited the decision to "current challenges to our business and the industry," it's hard not to think that the cancellation of Avalanche's planned game Contraband had something to do with the current need to consolidate. Microsoft ended active development on the project in August during the fallout from the massive layoffs the tech giant announced over the spring and summer. Those cuts appeared to impact the fate of many other upcoming games and game studios that were working with Microsoft as either a developer or a publisher.

Since we won't get to see what it had planned for Contraband, Avalanche Studios will remain best known for the open-world mayhem of its Just Cause games for now. Contraband is the only game currently listed as a forthcoming title on the company website, so it's unclear what the next moves for the remaining team members will be. The notice closes by saying, "Despite these changes, we remain deeply committed to providing amazing games to our passionate player communities." Hopefully they'll be able to bounce back.

This article originally appeared on Engadget at https://www.engadget.com/gaming/just-cause-developer-avalanche-studios-is-the-latest-game-company-to-announce-layoffs-180048615.html?src=rss

Amazon Echo Show 8 and Show 11 hands-on: A cuter, more unified smart display

It's been a couple of years since the Echo Show 8 got an update, and even longer for the aging Echo Show 10. But today Amazon is fixing that with two brand-new smart displays: the fourth-gen Echo Show 8 and the Echo Show 11.

Right away, the first thing you notice about Amazon's refreshed lineup is their designs. In front, there's a slim tablet-based HD display (either 8.7 inches or 11 inches depending on the model). Around back there's a curvier housing covered in a mesh fabric for the display's internals and speakers that borrows a lot from the new Echo Studio and Echo Dot Max. This is a pretty big departure from Amazon's wedge-shaped predecessors and I think it's a success. Both models look more elegant and refined, while their rounded bases make it easier to angle them properly in whatever room they're in. 

The design of the Echo Show 8's rear housing is clearly inspired by Amazon's recently updated Echo Dot Max.
Sam Rutherford for Engadget

That said, while both models feature new 13MP cameras with auto-framing tech (meaning they can track your face if you need to move around the room while on a video call), neither version has a built-in motor that would allow the entire display to rotate and spin like you got on the old Echo Show 10. I suspect this is a tacit admission by Amazon that a movable display is a bit of a gimmick, at least on a smart display. Or it's simply not necessary when the device's camera can re-compose your video framing dynamically in software.

Elsewhere, there are a few handy physical controls for volume located on the right side of the Echo Shows' displays along with a toggle for disabling the onboard mics and camera. Aside from that, there's a single barrel plug in back for power (which is slightly annoying; I wish it were USB-C) and not much else. So if for some reason you want to connect the new Echo Shows to wired internet, you're going to need to get pretty creative.

Unfortunately, I didn't get a chance to hear how the audio on the new Echo Show models compares to the refreshed Echo Studio or Dot Max. However, Amazon's updated displays are a big improvement. They have huge viewing angles, so it's never hard to see what's on the screen from wherever you're standing. And while Amazon hasn't provided official brightness figures, based on what I've seen, the panels are rather vibrant, so there shouldn't be any major issues viewing things in sunny rooms. 

Amazon's refreshed UI is also rather straightforward. All you need is a couple taps or swipes to open things like the video tab, music controls, settings and a list of upcoming calendar events. Meanwhile, the addition of Amazon's AZ3 Pro chip has greatly improved the responsiveness of touch and gesture input to the point where it felt a bit faster than the Google Nest Hub Max I have at home. 

The updated UI on the fourth-gen Echo Show.
Sam Rutherford for Engadget

Of course, the real impact of the new Echo Shows is yet to be seen, because while updated hardware is nice, the real value of these smart displays is how they are now better positioned to be the center of Amazon's smart home ecosystem. Both devices support Zigbee, Matter and Thread, so it should be easy to use them to control other devices, while features like Wi-Fi radar enable a wider range of contextual interactions from Alexa. And while I think the ability to create routines and automations strictly using your voice is a major upgrade for the average user, I wasn't able to test that functionality out myself at the event. 

The other potential omission is that while the Echo Show 8 and 11 got much-needed refreshes today, the same can't be said for the Echo Show 5. So while that model continues to be on sale, I wouldn't be surprised if it got discontinued when supply runs out or re-imagined as something closer to a smart alarm clock sometime in the future, as its smaller screen makes its role as a smart home hub a bit more limited. 

The new Echo Show 8 ($180) and Echo Show 11 ($220) are available for pre-order today and will come with early access to Alexa+ before official sales begin on November 12. 

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/amazon-echo-show-8-and-show-11-hands-on-a-cuter-more-unified-smart-display-173918264.html?src=rss

OpenAI's Sora app is real but you'll need an invite to try it

Well, that was fast. One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It's called the Sora app, and it's powered by OpenAI's new Sora 2 video model, allowing it to generate AI-made clips of nearly anything. As expected, the app's signature "cameo" feature allows people to add your likeness to videos they generate. 

Cameos are likely to be controversial, even if OpenAI is giving users a lot of control over whether someone can replicate their likeness in clips Sora generates. When you first start using the app, you can grant your friends (and even strangers) permission to generate images of you. Whenever someone uses your likeness in a video, Sora will designate you as the "co-owner" of that clip, allowing you to later delete it or prevent others from further modifying the video with subsequent generations. The latter plays into Sora's "Remix" feature, which allows users to jump on trending videos to offer their own take on them. Sora 2 can also generate sound alongside video, a first for OpenAI's video models.

Separate from the above restrictions, Sora can't generate videos of public figures — unless they upload their likeness to the app and grant their friends or everyone permission to use it in their creations — and the software will refuse to make pornographic content.    

Right now, Sora is only available on iOS, with no word yet on when it might arrive on Android, and you'll need an invite from the company. However, those lucky few who can join are able to invite four friends to download the software, much like the early days of, say, Bluesky or Clubhouse (lol). OpenAI is only making Sora available to people in the US and Canada (sorry, everyone else).

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-sora-app-is-real-but-youll-need-an-invite-to-try-it-171849671.html?src=rss

Survival climbing game Cairn has been delayed until 2026

The survival climbing game Cairn has been delayed until the first quarter of 2026. Development studio The Game Bakers had previously announced a release date of November 5 of this year.

The delay is so the team can spend more time on "optimization, debug and polish." Creative director Emeric Thoa said that "after 5 years of work, it makes no sense to rush it" as "we want to be proud of the game we launch." Delays are never fun, but they are a whole lot better than buying a broken game at launch.

We don't have an actual release date yet, but the game's still coming to both PC and PS5. There is a demo available, which has racked up 600,000 players across both platforms. The Game Bakers is beefing up that demo on October 13, adding ghost recordings of speedrunners and staffers. The company says these ghosts can be followed to "check new techniques or discover new routes and hidden areas." Mario Kart and other racing games have been doing something similar for years.

For the uninitiated, Cairn is a tough-as-nails rock-climbing game with a free solo mode for added difficulty. There's no UI feedback, so players have to pay attention to the avatar's breathing and body language. It feels like a more intense cousin of the peaceful Jusant, which is another rock-climbing sim.

This article originally appeared on Engadget at https://www.engadget.com/gaming/survival-climbing-game-cairn-has-been-delayed-until-2026-171512591.html?src=rss

Amazon has a new smart remote that's completely programmable by Alexa+

Amazon may have just unveiled a ton of new products across its Ring, Blink, Echo and Kindle categories, but it still had one more piece of hardware to show. Though it didn't get mentioned during the company's Devices and Services event earlier today, there is a new Smart Remote under the Amazon Basics brand that will be available for pre-order for $19.99 and will ship in October. According to the product listing page, it will be released on October 30.

At first glance, the Smart Remote looks like a regular switch that you mount on your wall to control your lights or other appliances. Its full name on Amazon's current pre-order page even says it's a "Smart Dimmer Switch and Remote." It basically has four buttons that you can configure via the Alexa app or ask Alexa+ to map routines to. During a demo at the event space, an Amazon representative told an Echo Show, "Alexa, when I press the top button, I want you to activate the party time scene and play 'Alive' by Pearl Jam."

The assistant acknowledged the request and within 10 seconds said it had completed the task. The rep pressed a button and lights in the demo room came on, while the song started playing on the Echo Show.

You can also use the Routines section of Amazon's app to customize what you want the device to do. In the same demo, the company also showed how the assistant can suggest routines based on your habits. It can also remind you to, say, take out the trash if you've connected a Ring camera and it's noticed patterns in which day of the week your garbage is removed from the street.

The battery-powered remote can be mounted on a wall or surface and can be magnetically attached for maximum convenience. Though a simple remote might not be the most exciting thing, especially at an event where Amazon's voice assistant and AI were so widely talked about, it's still something people might find useful, particularly if you'd rather press a button to trigger a series of actions than hunt for the exact words, pronounced precisely, that your smart speaker will understand. 

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/amazon-has-a-new-smart-remote-thats-completely-programmable-by-alexa-170021739.html?src=rss

Microsoft's Windows 11 2025 update starts rolling out today (but don't expect anything new)

What good is an update if it doesn’t actually add anything new? That’s the question I’m left asking about Microsoft’s Windows 11 2025 update (AKA Windows 11 25H2), which the company will begin pushing out today. Instead of adding any major new capabilities, Microsoft says it’s an “enablement package” that includes recent features added to last year’s 24H2 release. If anything, it could be a push for stragglers who’ve ignored recent updates to get onboard with new Windows 11 capabilities.

Microsoft says the Windows 11 2025 update also “includes significant advancements in build and runtime vulnerability detection, coupled with AI assisted secure coding.” Additionally, the new update should be leaner than before, thanks to the removal of PowerShell 2.0 and the Windows Management Instrumentation command-line tool (WMIC). Those are legacy features most users have never touched, but their removal could annoy power users and IT admins who still run ancient scripts built on PowerShell 2.0.

As usual, Microsoft says it will start delivering the Windows 11 2025 update to users in waves. The first batch includes users with 24H2 devices who’ve turned on “Get the latest updates as soon as they’re available” in Windows Update. Now that Microsoft has moved to a frequent update cadence, you can expect to see actual new features for Windows 11 25H2 arriving in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/computing/microsofts-windows-11-2025-update-starts-rolling-out-today-but-dont-expect-anything-new-170005064.html?src=rss

The best October Prime Day robot vacuum deals you can get now: Save on machines from iRobot, Shark, Dyson and others

It's frankly amazing how good vacuum cleaners are these days. Once the laughingstock of the gadget world with their dusty bags and tiny wheels, today's vacuums are sleek dirt-destroying machines, capable of rendering a house habitable no matter how many cats live in it. Some of them are even robots that will do the cleaning for you. For October Prime Day, Amazon has steeply cut the prices of some of the best vacuums (and some pretty good ones alongside). Now is a fantastic time to upgrade your cleaner, so check out the list below for our best recommendations.

iRobot Roomba 104 Vac for $150 (40 percent off, Prime exclusive): This entry-level Roomba is a good pick for anyone who's new to owning a robot vacuum. It features a multi-surface brush and an edge-sweeping brush to clean all types of flooring, and it uses LiDAR navigation to avoid obstacles as it goes. The iRobot mobile app lets you control the robot, set cleaning schedules and more.

Shark Matrix Plus 2-in-1 for $300 (57 percent off, Prime exclusive): The Shark Matrix Plus takes the robot vacuum concept even further by working a mop into the design for hands-off wet cleaning. This model is self-cleaning, self-emptying, self-charging and capable of tackling ground-in stains on hard floors.

Shark Navigator Lift-Away Deluxe for $160 (27 percent off): Moving into manual vacuums, let's start with one of the best. The Shark Navigator Lift-Away is a champion at getting deeply ingrained crud out of carpets, but it's also capable of squaring away bare floors. You can switch between the two settings quickly, and the lift-away canister makes it easy to empty.

Levoit LVAC-300 cordless vacuum for $250 ($100 off, Prime exclusive): One of our favorite cordless vacuums, this Levoit machine has great handling, strong suction power for its price and a premium-feeling design. Its bin isn't too small, it has HEPA filtration and its battery life should be more than enough for you to clean your whole home many times over before it needs a recharge.

Dyson Ball Animal Total Clean Upright Vacuum for $500 (24 percent off): Dyson is still the king of reinventing vacuums, and the bagless, hyper-maneuverable Ball Animal is a blast to use. The Ball design is based on ease of steering, but the hidden MVP is the sealing — from the head to the canister, not a hair is getting out of this one once it's in.

Amazon Basics Upright Bagless Vacuum Cleaner for $55 (21 percent off): All right, nobody goes to Amazon Basics to be impressed, but we have to admit this vacuum exceeds expectations. It's light, it has a big dust reservoir and it comes with all the attachments you'll need for a reasonably sized apartment. The filter is also simple to remove and clean.

Black+Decker QuickClean Cordless Handheld Vacuum for $27 (33 percent off): Rounding out the list, we've got this small-but-mighty hand vacuum, perfect for crevices, shelves or cleaning out your car. It weighs about 1.4 pounds and hoovers up small messes in the blink of an eye. The lithium-ion battery stays charged for up to 10 hours.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-october-prime-day-robot-vacuum-deals-you-can-get-now-save-on-machines-from-irobot-shark-dyson-and-others-151504093.html?src=rss

The best October Prime Day deals on Anker charging gear and other accessories

You may not be looking to spend big on tech this October Prime Day, but it's still a good idea to stock up on tech essentials during the shopping event while you can get them at a discount. Anker makes some of our favorite charging gear, and I always end up picking up an accessory or two during Prime Day to ensure I have what I need when I need it most. I also feel better knowing I didn't pay full price for it.

For example, in sales past, I picked up a couple of extra USB-C charging cables so I could keep one in my carry-on luggage and always have one when I travel. My partner will likely be upgrading to an iPhone 17 this year, so we'll have to get a few more USB-C cables now that Lightning is officially banished from our home. Also, every year it seems I need yet another surge protector, even though I picked one up the year before; one can never have too many. Here, we've collected all of the best October Prime Day deals on Anker devices and other charging gear we could find, and we'll update this post as the event goes on with the latest offerings.

Power banks are not as straightforward as you might think. They come in all shapes, sizes and capacities and can have extra features like magnetic alignment, built-in kickstands, extra ports and more.

It's worth considering how you'll use a power bank before you decide on the right one to buy. Smartphones don’t need huge-capacity bricks to power up a couple of times over; a 5K or 10K portable charger should be plenty if that’s all you’re looking to support. If you want a more versatile accessory that can charge a tablet, laptop or gaming handheld, consider a brick with a higher capacity — and more ports so you can charge multiple devices simultaneously.

A good wireless charger can lighten your cable load. While wired charging remains faster and more efficient, wireless chargers can clean up your space by eliminating a few of those cables that constantly trip you up.

We recommend thinking about where you'll use a wireless charger before buying one. Those outfitting a home office with new tech may want a wireless charging stand that puts their phone in an upright position that’s easier to see while it’s powering up, while those who want a wireless charger for their nightstand might prefer a lay-flat design or a power station that can charge a smartphone, smartwatch and pair of earbuds all at once.

Plenty of other charging gear is on sale for Prime Day. It’s never a bad idea to pick up a few 30W USB-C adapters so you always have what you need to reliably power up your phone. Same goes for extra USB-C (or USB-A) cables that can live in your car, in your office at work or by the couch.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-october-prime-day-deals-on-anker-charging-gear-and-other-accessories-164536998.html?src=rss

This battery-powered Ring doorbell is down to $80 for Prime Day

The Ring Battery Doorbell Plus is on sale for almost half off and is at the lowest price we've ever seen for this model. Normally retailing for $150, the smart doorbell is on sale for $80, a discount of 47 percent. This aggressive sale comes ahead of another Prime Day that runs October 7-8.

The Battery Doorbell Plus offers a 150-by-150-degree "head to toe" field of vision and 1536p high-resolution video. This makes it a lot easier to see boxes dropped off at your front door since it doesn't cut off the bottom of the image like a lot of video doorbells.

This model offers motion detection, privacy zones, color night vision and Live View with two-way talk, among other features. Installation is a breeze since you don't have to hardwire it to your existing doorbell wiring. Most users report that the battery lasts between several weeks and several months depending on how the doorbell is set up, with power-heavy features like motion detection consuming more battery life.

With most video doorbells today, you need a subscription to get the most out of them, and Ring is no exception. Features like package alerts require a Ring Home plan, with tiers ranging from Basic for $5 per month to Premium for $20 per month. You'll also need a plan to store your video event history.

Ring was acquired by Amazon in 2018, and now offers a full suite of home security products including outdoor cameras, home alarm systems and more. This deal is part of a larger sale on Ring and Blink devices leading up to Prime Day.

This article originally appeared on Engadget at https://www.engadget.com/deals/this-battery-powered-ring-doorbell-is-down-to-80-for-prime-day-154508649.html?src=rss

Daniel Ek is stepping down as Spotify CEO

Spotify founder and CEO Daniel Ek will transition to the role of executive chairman on January 1 of next year. Current co-presidents Gustav Söderström, the company's chief product and technology officer, and Alex Norström, its chief business officer, will take his place as co-CEOs.

“Over the last few years, I’ve turned over a large part of the day-to-day management and strategic direction of Spotify to Alex and Gustav — who have shaped the company from our earliest days and are now more than ready to guide our next phase. This change simply matches titles to how we already operate. In my role as Executive Chairman, I will focus on the long arc of the company and keep the Board and our co-CEOs deeply connected through my engagement," Ek said in a statement.

In a letter to Spotify employees, Ek also shared that he wants to help create more technology-driven "supercompanies" that "tackle some of the biggest challenges of our time."

As a recent example of Ek's other interests, this summer he led a $700 million investment round into the defense tech firm Helsing. The company sells AI-powered software that analyzes weapons and sensor data on battlefields to help with military decision-making. Last year, Helsing started manufacturing a line of military drones. Ek has received pushback on the investment, with a number of smaller artists, as well as Massive Attack, pulling their music catalogs from Spotify.

Daniel Ek founded Spotify in 2006 alongside Martin Lorentzon and oversaw the company's growth to almost 700 million monthly active listeners. It's been a busy year for the music streaming giant, which finally started offering lossless streaming after a multi-year wait.

The company also finds itself at a crossroads as more AI-generated music makes its way to the platform. Spotify recently made some policy changes to address AI, though these were aimed only at fraudulent and deceptive uses of the technology. Fully AI-generated songs and albums are still permitted.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/daniel-ek-is-stepping-down-as-spotify-ceo-161519791.html?src=rss

Amazon just announced a totally redesigned Echo Show 8 and Echo Show 11

During its invite-only and not live-streamed Devices and Services event today out of New York City, Amazon announced its upcoming Ring and Blink devices, new Fire TVs and streaming sticks, Kindle Scribes and, finally, a whole new Echo device lineup — including new Echo Show smart displays. The latest smart-home screens were built to showcase Alexa+, the new and AI-improved smart assistant.

There's a new Echo Show 8 and Echo Show 11 (as well as new Echo speakers). Amazon called them the most powerful Echo devices ever created. They have custom silicon, the AZ3 Pro with an AI Accelerator, as well as more advanced sensors and improved microphones for better noise cancellation. The look has been completely redesigned, and resembles a cross between the existing Echo Show 8 and the Echo Show 10, with a prominent speaker module at the bottom and a floating screen up top. The speakers pack full-range drivers that fire audio forward for clearer sound. 

Both of the new displays have negative liquid crystal screens designed to maximize viewing angles, so you can see them better from anywhere in a room. They each have 13MP cameras as well, the best ever included on an Echo Show. Those cameras and other sensors, including Wi-Fi radar, will enable contextualized Alexa+ interactions, like recognizing when you walk up to the display and triggering the AI to greet you, display your relevant information and even deliver one of your personalized reminders. 

Software upgrades include a new media control center for better access to your video, music and streaming apps. A new home hub supports Zigbee, Matter and Thread, which should let you hook up even more smart home devices for Alexa to tap into. If you use your display for family scheduling, you can try the new color-coded calendars. If you wear an Oura Ring, look for new wellness integrations centered around that fitness tracker. A new Alexa+ shopping widget will give you more control over your Amazon and Whole Foods deliveries while also suggesting items to buy and even gifts to give someone. 

Amazon Echo Show
Amazon/ Sam R for Engadget

The Echo Show 8 and Show 11 were redesigned with Alexa+ in mind, the service that Amazon revealed at an event earlier this year. The AI-enhanced upgrade to Amazon's virtual assistant is supposed to be more conversational, retaining memories of your chats for more contextualized responses. Our experience with an early version of the assistant was… complicated. It was better at many things, like multi-step tasks and using information from previous interactions, but it, like all AI experiences, highlighted the limitations of computers trying to be people. Alexa+ is currently free with Prime, or costs $20 per month for non-Prime members.

Prior to the announcement of the new display, the Echo Show lineup consisted of four models: The Echo Show 5, 8, 15 and 21 (the Echo Show 10 hasn’t been consistently available these past few months). Each model number refers to the size of the screen (measured on the diagonal) and the smallest, the Echo Show 5, is designed for office desks or small kitchens. The older Show 8 was more suited to acting as a smart home hub and, like the Echo Show 5, designed to sit on a table or countertop. Both were last updated in 2023. The Show 15 and 21 are wall-mountable and can act as calendars and family planners in addition to subbing in as small TVs when needed. The two larger Show displays were last updated in 2024.

The new Echo Show devices are available for pre-order today and will come with Alexa+ Early Access. The Echo Show 8 sells for $180 and the Echo Show 11 for $220. Both will ship on November 12. 

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/amazon-just-announced-a-totally-redesigned-echo-show-8-and-echo-show-11-145937172.html?src=rss

Alexa Home Theater will let you use Amazon's Echo speakers for surround sound

What if you could just use a ton of Echo speakers as a surround sound setup for your TV? That’s basically what Amazon is trying to accomplish with its new Alexa Home Theater feature, which was announced during its 2025 device launch today. As the name implies, Alexa Home Theater works with up to five of its new Echo Studio or Echo Dot Max devices to create a surround sound environment with the Fire TV Stick 4K or 4K Max. According to the company, Alexa will automatically set up the Home Theater feature once you’ve plugged in several Echo devices.

Clearly, Amazon isn’t aiming for the home theater enthusiast crowd here. A surround sound system without a subwoofer simply won’t sound very exciting. But if you’re going for a fairly minimalist setup, I could see how having a few Echo orbs around your living room could be more aesthetically pleasing than giant speakers. Unfortunately, Alexa Home Theater won’t work with the original Echo Studio, Amazon representatives confirmed today.

While the company is pitching this feature as an inexpensive entry into surround sound, a full Alexa Home Theater setup will start at $500 for five Echo Dot Max speakers, and it’ll get even pricier once you throw in the $220 Echo Studio. At that point, just get a decent soundbar, which will be able to virtualize surround sound and offer better low-end.


This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/alexa-home-theater-will-let-you-use-amazons-echo-speakers-for-surround-sound-153325469.html?src=rss

Alexa+ at Amazon's 2025 event: Home Greetings, Fire TV recommendations and more

Before it began slowly trickling out Alexa+ to users at the start of February, Amazon promised a smarter, more conversational assistant. It turned out the reality was more complicated, and, more than anything, Alexa+ in its current state is a showcase of the limitations of generative AI. Of course, that's not stopping Amazon from iterating on the digital assistant. During its "Devices and Services" event on Tuesday, Amazon announced a host of Alexa+ updates, with many of them enabling new integrations alongside the company's brand new devices. 

One of those new features is Alexa+ Greetings, which will roll out to Ring's new 2K and 4K devices this December. With the help of Ring's image recognition technology, Alexa+ will be able to make decisions about how to handle different visitors to your home. If it sees one of your friends or family members, it will know to greet them. Conversely, the digital assistant will ask questions of strangers to determine the purpose of their visit. It can also manage deliveries by providing instructions to couriers about where to leave a package.    

On Amazon's new Fire TVs, Alexa+ powers new, more powerful recommendations and can answer questions about the shows and films you're watching.
Amazon

Alexa+ is also built directly into Vega, Amazon's new smart TV operating system. The integration will translate into more personalized recommendations, and the ability for Alexa+ to show you content related to your questions. For example, you can use the assistant to find a specific scene in a movie. In the demo Amazon showed, Vice President of Fire TV Aidan Marcuss instructed Alexa to "find the scene where Hatteburg hits a home run," and the assistant pulled up the appropriate spot in the film Moneyball. This functionality will be available while watching live sports too, meaning you can ask the assistant for updates on your favorite teams and more. 

Over on the Kindle side of things, you'll be able to send notes and documents you have stored on your Kindle Colorsoft or Scribe 3 to Alexa+. This feature will allow you to have a conversation about the contents of those files with the digital assistant, with the integration slated to arrive sometime early next year.    

A group shot of the 2025 Echo Family
Amazon

Of course, the place you're most likely to interact with Alexa+ is when using an Echo device. Amazon is billing the refreshed 2025 Echo line — made up of Echo Dot Max, Echo Studio, Echo Show 8 and Echo Show 11 — as "designed for Alexa+". To that end, the company has equipped all of the new devices with two new chips, the AZ3 and AZ3 Pro. The silicon is faster and offers better voice processing, with Amazon claiming Alexa+ is over 50 percent better at detecting when you go to wake it. At the same time, there are new third-party integrations, with some notable partners including Fandango, Uber and Lyft. All of these will be found in the new Alexa+ Store where you'll be able to see the assistant's growing list of capabilities. If you would rather use a speaker from a different brand, Amazon said Bose, Sonos and Samsung, among others, are working to bring Alexa+ to their devices. Automakers like BMW are doing the same with their cars.  

Elsewhere, a new Alexa+ shopping widget will allow you to keep track of your Amazon, Whole Foods and Amazon Fresh purchases, including any deliveries you have scheduled. Naturally, Alexa+ can search the entire Amazon catalog and answer questions about any products you might want to buy. 

All of the new Echo devices Amazon announced today will ship with early access to Alexa+ out of the box. You can pre-order all four today, with general availability of the Echo Dot Max and Echo Studio to follow on October 29, while the Echo Show 8 and Echo Show 11 are slated to arrive on November 12. 

This article originally appeared on Engadget at https://www.engadget.com/ai/alexa-at-amazons-2025-event-greetings-and-more-143211291.html?src=rss

The best October Prime Day deals already live: Early tech sales on Amazon devices, Apple, Samsung, Anker and more

October Prime Day arrives soon, on October 7 and 8, but as expected, you can already find some decent sales available now. Amazon always has lead-up sales in the days and weeks before Prime Day, and it’s wise to shop early if you’re on the hunt for something specific and you see that item at a good discount.

Prime Day deals are typically reserved for subscribers, but there are always a few that anyone can shop. We expect this year to be no exception, and we’re already starting to see that trend in these early Prime Day deals. These are the best Prime Day deals you can get right now ahead of the event, and we’ll update this post with the latest offers as we get closer to October Prime Day proper.

Apple MagSafe charger (25W, 2m) for $35 (29 percent off): The latest version of Apple's MagSafe puck is Qi2.2-certified and supports up to 25W of wireless power when paired with a 30W adapter. The two-meter cable length on this particular model gives you more flexibility on where you can use it: in bed, on the couch, at your desk and elsewhere.

Blink Mini 2 security cameras (two-pack) for $35 (50 percent off): Blink makes some of our favorite security cameras, and the Mini 2 is a great option for indoor monitoring. It can be placed outside with the right weatherproof adapter, but since it needs to be plugged in, we like it for keeping an eye on your pets while you're away and watching over entry ways from the inside.

Shark AI robot vacuum with self-empty base for $230 (58 percent off, Prime exclusive): A version of one of our favorite robot vacuums, this Shark machine has strong suction power and supports home mapping. The Shark mobile app lets you set cleaning schedules, and the self-empty base that it comes with will hold 30 days worth of dust and debris.

Leebein 2025 electric spin scrubber for $40 (43 percent off, Prime exclusive): This is an updated version of my beloved Leebein electric scrubber, which has made cleaning my shower easier than ever before. It comes with seven brush heads so you can use it to clean all kinds of surfaces, and its adjustable arm length makes it easier to clean hard-to-reach spots. It's IPX7 waterproof and recharges via USB-C.

Jisulife Life7 handheld fan for $25 (14 percent off, Prime exclusive): This handy little fan is a must-have if you live in a warm climate or have a tropical vacation planned anytime soon. It can be used as a table or handheld fan and even be worn around the neck so you don't have to hold it at all. Its 5,000 mAh battery allows it to last hours on a single charge, and the small display in the middle of the fan's blades shows its remaining battery level.

Apple Mac mini (M4) for $499 ($100 off): If you prefer desktops over laptops, the upgraded M4 Mac mini is one that won’t take up too much space, but will provide a ton of power at the same time. Not only does it come with an M4 chipset, but it also includes 16GB of RAM in the base model, plus front-facing USB-C and headphone ports for easier access.

Apple Watch Series 11 for $389 ($10 off): The latest flagship Apple Watch is our new pick for the best smartwatch you can get, and it's the best all-around Apple Watch, period. It's not too different from the previous model, but Apple promises noticeable gains in battery life, which will be handy for anyone who wants to wear their watch all day and all night to track sleep.

Amazon Smart Plug for $13 ($12 off): We named this the best smart plug for Alexa users because it hooks up painlessly and stays connected reliably. Use it to control lamps or your holiday lights with programs and schedules in the Alexa app, or with just your voice by talking to your Echo Dot or other Alexa-enabled listener.

Samsung EVO Select microSD card (256GB) for $23 (15 percent off): This Samsung card has been one of our recommended models for a long time. It's a no-frills microSD card that, while not the fastest, will be perfectly capable in most devices where you're just looking for simple, expanded storage.

Anker Soundcore Select 4 Go speaker for $26 (26 percent off, Prime exclusive): This small Bluetooth speaker gets pretty loud for its size and has decent sound quality. You can pair two together for stereo sound as well, and its IP67-rated design will keep it protected against water and dust.

Anker 622 5K magnetic power bank with stand for $34 (29 percent off, Prime exclusive): This 0.5-inch thick power bank attaches magnetically to iPhones and won't get in your way when you're using your phone. It also has a built-in stand so you can watch videos, make FaceTime calls and more hands-free while your phone is powering up.

JBL Go 4 portable speaker for $40 (20 percent off): The Go 4 is a handy little Bluetooth speaker that you can take anywhere you go thanks to its small, IP67-rated design and built-in carrying loop. It'll get seven hours of playtime on a single charge, and you can pair two together for stereo sound.

Amazon Fire TV Stick 4K Max for $40 (33 percent off): Amazon's most powerful streaming dongle supports 4K HDR content, Dolby Vision and Atmos and Wi-Fi 6E. It also has double the storage of cheaper Fire TV sticks.

Anker Soundcore Space A40 for $45 (44 percent off): Our top pick for the best budget wireless earbuds, the Space A40 have surprisingly good ANC, good sound quality, a comfortable fit and multi-device connectivity.

Amazon Echo Spot for $50 ($30 off): Amazon brought the Echo Spot smart alarm clock back from the dead last year with a new design, improved speakers and added Alexa chops. In addition to being able to control smart home devices and respond to voice commands, the Echo Spot can also act as a Wi-Fi extender for those who have Eero systems.

Anker MagGo 10K power bank (Qi2, 15W) for $63 (22 percent off, Prime exclusive): A 10K power bank like this is ideal if you want to be able to recharge your phone at least once fully and have extra power to spare. This one is also Qi2 compatible, providing up to 15W of power to supported phones.

Levoit Core 200S smart air purifier for $70 ($20 off, Prime exclusive): This compact air purifier cleans the air in rooms up to 140 square feet and uses a 3-in-1 filter that removes microscopic dust, pollen and airborne particles. It has a mobile app that you can use to set runtime schedules, and it works with Alexa and Google Assistant voice commands.

Amazon Fire TV Cube for $100 (29 percent off): Amazon's most powerful streaming device, the Fire TV Cube supports 4K, HDR and Dolby Vision content, Dolby Atmos sound, Wi-Fi 6E and it has a built-in Ethernet port. It has the most internal storage of any Fire TV streaming device, plus it comes with an enhanced Alexa Voice Remote.

Levoit LVAC-300 cordless vacuum for $250 ($100 off, Prime exclusive): One of our favorite cordless vacuums, this Levoit machine has great handling, strong suction power for its price and a premium-feeling design. Its bin isn't too small, it has HEPA filtration and its battery life should be more than enough for you to clean your whole home many times over before it needs a recharge.

Shark Robot Vacuum and Mop Combo for $300 (57 percent off, Prime exclusive): If you're looking for an autonomous dirt-sucker that can also mop, this is a good option. It has a mopping pad and water reservoir built in, and it supports home mapping as well. Its self-emptying base can hold up to 60 days' worth of debris, too.

XReal One Pro AR glasses for $649 (16 percent off): The latest from XReal, these smart glasses let you use almost any device, including your smartphone, with a large virtual display. Their 1080p Micro-OLED screens are bright and sharp, plus they're pretty comfortable to wear.

Nintendo Switch 2 for $449: While not technically a discount, it's worth mentioning that the Switch 2 and the Mario Kart Switch 2 bundle are both available at Amazon now, no invitation required. Amazon only listed the new console for the first time in July after being left out of the initial pre-order/availability window in April. Once it became available, Amazon customers looking to buy the Switch 2 had to sign up to receive an invitation to do so. Now, that extra step has been removed and anyone can purchase the Switch 2 on Amazon.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-october-prime-day-deals-already-live-early-tech-sales-on-amazon-devices-apple-samsung-anker-and-more-050801911.html?src=rss

Amazon's redesigned Echo Studio speaker has upgraded drivers and a new chip for Alexa

When it comes to new Echo speakers, Amazon rarely shows something that will appeal to customers who crave premium sound quality. The last time it did, it debuted the Echo Studio, which could handle immersive Dolby Atmos and double as a home theater speaker. At its hardware event in NYC today, Amazon unveiled an updated Echo Studio with new drivers, a new chip and an all-new design. 

Amazon says the new model offers "incredible high fidelity sound" thanks to three full-range drivers and an excursion woofer for maximum bass. Like the original Studio, this speaker is designed for Dolby Atmos content, which is available across both movies and music. There's a new AZ23 Pro chip inside as well, silicon that's built to power audio features and Alexa+ on the new Studio. Amazon says the component offers advanced speech and audio processing — as well as visual processing on the new Echo Show lineup.

The company also updated the Echo Studio design, ditching the large cylinder for a more spherical shape. The blueish light ring for Alexa is now on the front instead of the top. The controls are now on the front as well, where you'll find buttons for volume and muting the microphones. Overall, the new Echo Studio is 40 percent smaller than the original and is now covered in a 3D knit fabric for acoustic transparency.

Amazon also announced Alexa Home Theater during its reveal of the Echo Studio. This feature allows you to connect up to five Echo Studio or Echo Dot Max devices to create a more immersive sound setup. The company promises that you'll simply plug in the speakers and Alexa will handle the rest. The assistant will use Omnisense to automatically tune the speakers based on their position, the size of the room and the space's acoustic characteristics. As you might expect, Amazon plans to sell these new speakers in Alexa Home Theater bundles so you don't have to spend too much time shopping for a multi-speaker system.

The new Echo Studio is available for pre-order today for $220, and early adopters will get Alexa+ Early Access with the purchase. The new speaker will ship on October 29.

This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/amazon-reveals-an-updated-echo-studio-speaker-with-a-new-chip-and-upgraded-drivers-150303898.html?src=rss

Amazon's new Echo Dot Max is a smart speaker built for Alexa+

Amazon has seen the power and potential of Alexa+, its AI-powered smart assistant, and is now launching a raft of devices to support it. Today, at its September 2025 devices event, the company unveiled several new Echo devices, with the Dot Max (pictured, right) leading the pack.

The Echo Dot Max is a $100 smart speaker designed to occupy every room in your home, complete with the usual smart home bonuses. The major changes inside and out — new custom silicon, new sensors and better sound — are there to ensure it's better able to run Alexa+.

For instance, the Dot Max features two drivers which, when combined, produce three times as much bass as the fifth-generation Echo Dot. The sound, which will adapt to the local space, is apparently so good that Amazon's Daniel Rausch described it as the "most performant" smart speaker at this sort of price.

Similarly, new microphones are paired with a new, custom-made AZ3 chip for improved conversation detection and background noise filtering. AZ3 can also harness Amazon's "Omnisense" platform that combines Wi-Fi Radar, audio and its accelerometer to monitor what's going on in your home. 

You'll notice the hardware has been redesigned, with the light ring moved to a new control surface on the front of the sphere. 

Amazon's Echo Dot Max, along with the rest of its Echo devices, is available to pre-order today, with buyers getting early access to Alexa+ as part of the deal. Shipping is expected to commence on October 29 for both the Dot Max and its pricier, bigger sibling, the Echo Studio (pictured, left).

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/amazons-new-echo-dot-max-is-a-smart-speaker-for-alexa-150222265.html?src=rss

Ring announces Retinal 4K Vision doorbells and Search Party for finding pets

Today Amazon held its annual Devices and Services event, debuting its next generation of products. While Amazon isn't live streaming the event, we're on the floor sharing everything it announces in our live blog, including a slew of new Ring products and features.

Take Ring's Retinal Vision, rolling out across its new line of 2K and 4K doorbells and cameras. It offers back-side illumination sensors and 10x zoom for clear vision, even in low light. Ring pairs these features with custom large aperture lenses that it claims will capture more light and maintain sharpness throughout the frame.

Retinal Vision also "optimizes every step of the imaging process with advanced AI tuning," according to Ring. Basically, it will alter your camera's clarity based on its location. Ring will use AI to sample your camera's quality multiple times a day over a period of up to two weeks. Then, it will do a "final optimization" that should provide the best video for your location.

Retinal 4K Vision will be available on the all-new Wired Doorbell Pro, Spotlight Cam Pro, Floodlight Cam Pro and Outdoor Cam Pro, along with three Power over Ethernet devices: the Spotlight Cam Pro POE, Outdoor Cam Pro POE and Wired Doorbell Elite. Meanwhile, Retinal 2K Vision is coming to the all-new Indoor Cam Plus and Wired Doorbell Plus. These devices are available to pre-order today.

Beyond 2K and 4K retinal technology, Ring is also introducing Alexa+ Greetings, an intelligent doorbell attendant. It will basically do the hard work of talking to strangers for you. Alexa+ can ask why someone is at your door, give them instructions and manage your deliveries.

It works hand-in-hand with another new feature called Familiar Faces. This tool allows Ring to recognize your familiar people and let you know exactly who's at your door. It also lets you limit notifications that come from one of their typical routines, separating them from alerts triggered by an unknown person. 

Alexa+ Greetings and Familiar Faces — both available in December — build on the AI-generated descriptions of your alerts that Ring introduced in June. Ring founder Jamie Siminoff gave the example text, "A person is walking up the steps with a black dog," and said the descriptions will be "intentionally concise." They let you know exactly who is coming to the door through text, not just that someone's there. The feature is available for Ring Home Premium subscribers, which costs $20 a month or $200 annually.

Then there's Search Party, which turns your outdoor Ring camera into another lookout for lost pets. If one of your neighbors reports their pet missing in the Ring app, your camera can use AI to identify the animal and send you an alert. However, it won't share any images or videos without your permission. It will start working for lost dogs in November, followed by cats and other pets.

This article originally appeared on Engadget at https://www.engadget.com/cameras/ring-announces-retinal-4k-vision-and-search-party-for-finding-pets-143314419.html?src=rss

Kindle Scribe Colorsoft hands-on: Vivid and responsive

For the third generation of its Kindle Scribe line of reading-and-writing tablets, Amazon is giving the device a makeover and two new configurations. Since its introduction in 2022, the Scribe hasn’t changed much physically, with the sophomore model mostly getting a new color. This year, Amazon is launching three flavors of the Scribe. At the entry level is a model with a monochrome screen and no front light. Next is the Kindle Scribe 3, a version that has LED front lights but with a black-and-white display. Finally, at the top of the line is the Kindle Scribe Colorsoft — Amazon’s first writing tablet with a color display.

I was able to briefly check out the three new tablets ahead of the company’s launch event, and was quite impressed at the responsiveness and color saturation on the demo units I saw. Also, Amazon hasn’t given these devices a name that indicates what generation they are, simply calling them the “all-new Kindle Scribe lineup” and adding the Colorsoft label to the color model. To make things easier for this article, I’ll be occasionally referring to these as the Kindle Scribe 3.

The first thing I noticed was the Scribe 3’s shape. I’m used to the slightly thicker bezel along one long side of the display that, on the older Scribes, has been a handy place to grip the device without touching the screen. But it wasn’t just there for my thumb to hold onto. That area was also where Amazon placed many of the Kindle’s components like the processor and memory.

To reduce the size of the bezel, Amazon’s Kindle vice president Kevin Keith said “we had to engineer basically the electronics to fold behind the display.” The result is a symmetrical-looking device with a barely-there bezel that’s the same size along all sides of the 11-inch display (slightly bigger than its predecessor’s). It weighs 400 grams (or 0.88 pounds), which should make it easier to hold with one hand while taking notes. Keith also said that, at 5.4mm, the new Kindle Scribe is “thinner than the iPhone Air.” I should point out that a lot of tablets are similarly sleek. The 13-inch iPad Pro and Samsung’s Galaxy Tab S11 Ultra both have barely-there profiles of 5.1mm, while the 11-inch iPad Pro measures 5.3mm.

[Image: Side view of the Kindle Scribe 3 held in mid-air. Cherlynn Low for Engadget]

Another way Amazon was able to make the latest Scribe so thin and light was by reducing the number of layers in the display. It removed the anti-glare film on the device, using a glare-free display instead, as well as a textured glass that mimics the friction you’d get when putting pen to paper. The company also got rid of a touch layer that was on top of the display before, since it was able to use a screen with integrated touch input support. Keith said that Amazon also considered the size of the casing around the USB port to aid in shrinking the device further.

On models with front lights (all but the entry-level configuration), Amazon had to use miniaturized LED front lights since there was no longer a chunky bezel to contain them. In addition to making them smaller, the company also doubled the number of bulbs to ensure consistency of lighting across the page.

I couldn’t help reaching for the new Kindle Scribe when I saw it, mostly because it looks a lot different than its predecessor. I already found the original Scribe satisfyingly svelte and this latest model is similarly attractive. I do wonder if I might miss having something to grip onto that isn’t the screen, but that might not be a problem if Amazon’s palm rejection technology is effective.

I did notice a slight dullness in the model without the LED front lights, but it remained as easy to read as an older Kindle. The other two certainly looked a lot brighter, with the higher contrast making onscreen text and drawings look fresher and more vibrant. I’ll get to the Colorsoft model in a bit, but I appreciated how clear and saturated colors appeared on its screen.

One of my favorite updates this year is magnets. Specifically, the magnets holding Amazon’s stylus to the Scribe itself have gotten stronger. Keith said “we added more magnetic force so that it’s harder to fall off,” and when I tried pulling the pen off the tablet it required noticeably more effort than with previous models. It also snapped back on more easily. Considering this was one of my complaints about the older Scribes, I’m very encouraged to see this improvement.

The stylus itself has also been refined, with a slightly thicker, rounded silhouette that Keith said is “a little bit more ergonomic.” It still has a rubberized top that works as a digital eraser and when I used it on the new Scribe I felt the urge to brush off eraser dust, just like I did with the predecessors. The programmable action button remains present as well.

Inside the new Kindle Scribes sit a new custom chip and more memory. Amazon also added the oxide display from its Paperwhite reader, and together with the new processor, that brings a "40 percent faster overall experience with page turning," according to Keith. Writing response time improves significantly too: it's now down to under 12 milliseconds on the new Kindle Scribe. That enables a much smoother writing experience with barely noticeable delay between putting the nib on the screen and the digital ink appearing, and because of the changes to the display, any parallax effect is "virtually gone."

During the few moments I had to scribble on the new Kindle Scribe, I found it hard to tell if there was a big improvement in fluidity or parallax effects compared to the previous models. It’s about as responsive as before, perhaps just a touch faster at showing what I’ve written. Without a side-by-side comparison, it’s not something I can evaluate right now.

I will say that I found the latest Scribe a lot easier to hold with one hand, despite the thinner bezels. That is with the caveat, of course, that I have yet to spend more than a minute writing on it. I usually have a hard time writing on the Scribe without a surface on which to prop it up, so I'm curious to see if it'll be easier to do so with the newest model.

I was able to get a good idea of how the Kindle Scribe Colorsoft’s color rendering compares to some of its competition, though. Every morning, I write three pages of free-flowing thoughts by hand, and I currently do so on the reMarkable Paper Pro. All my entries include highlighting of the date and time, and my experience with the color rendering on that device has been underwhelming. Technically, I can choose from yellow, green, blue, pink, orange and gray, but honestly I can barely tell the difference between yellow and orange, while blue and gray are also very close. So instead of five usable highlighter shades, I really only have three (gray is barely a hint of a tint).

The Kindle Scribe Colorsoft not only renders colors more vividly, but hues are more distinct from each other. I’d say the reMarkable Paper Pro is like reading a faded newspaper’s comic strip while the Colorsoft looks more like a glossy graphic novel. It’s still a bit muted, but you can at least see variations in shades.

Part of what makes the Colorsoft look nice is the fact that the device itself comes in a nice purplish hue that Amazon calls “fig.” With the selection of colors on its screen, I got a very autumnal vibe and was reminded of berries, for some reason. (It’s also possible I was hungry.)

Like the Kindle Colorsoft that Amazon announced in 2024, the Scribe Colorsoft uses a color filter and LEDs. What's slightly different is a new rendering engine that Amazon said "enhances the color and ensures writing is fast, fluid and totally natural."

While the Kindle Scribe Colorsoft has the same dimensions and weight as its monochrome counterpart, it has a slightly slower response rate of 14ms. I have only written on the Colorsoft so far, and will wait till I can spend more time with both tablets to see if this different latency makes a big difference.

[Image: The Kindle Scribe Colorsoft in the fig color option, showing Amazon's redesigned Home page on its display. Notes and book covers are rendered in color. Cherlynn Low for Engadget]

In addition to the new hardware, Amazon also updated the Scribe’s software. All Kindles will be getting a redesigned home page that better surfaces your recently added and edited content. Based on what I saw, instead of having rows of covers on the main screen, there is now a Search bar at the very top, followed by an area on the left half below that for “quick notes.” This is basically a notepad for you to continuously update whenever you need it, so you won’t need to create a new notebook every time you want to jot down a thought. To the right of this top half is the “Jump back into” section, which will show things you were recently working on.

Below those two portions is a row of titles called "Recently added," where things you just downloaded into your library will appear. So if you have been reading, say, The Body Keeps the Score and just bought Katabasis, you'll find the former at the top right and the latter in the "Recently added" section.

The search bar at the top is now powered by AI, because there is no escaping that. Thankfully, Amazon has been fairly cautious about its approach, which is particularly important for a product like the Kindle Scribe where people go to read and produce original content. The new AI feature here is a smarter search that not only indexes all your handwritten notes, but understands and groups common topics so you can search for something like “What have I told Panos Panay before?” The Scribe will scan your notebooks, find all your relevant scribblings and present everything you’ve written down across all your files and summarize its findings for you.

I didn’t have time to try this out but I am intrigued at the potential here. I make so many different to-do lists for Engadget’s events coverage that it would be nice to be able to ask “What are the tasks I need to do by the end of October” and possibly get a neatly organized list. The usefulness of this feature depends almost entirely on how intelligent the AI is, so I’ll have to wait till I can review it more thoroughly to say anything more evaluative.

Amazon is also bringing support for Google Drive and OneDrive, so you can create a folder in either service, add documents to them and the system will download them onto your Kindle Scribe. This is just an easier way to get files onto your Kindle, in addition to sending an email to the associated address or finding a way to add them to your Amazon account. OneNote support is coming as well, and it’ll allow you to export your notes as an embedded image or as a converted text document.

A “Send to Alexa+” feature is coming early next year, so you can share your notes or documents from the Kindle Scribe to the assistant. It will be able to pull information from your pages and remember or refer to them in your conversations, so you can ask it about what’s next on your to-do list or what items are already on your shopping note.

One more update on the redesigned home page: Instead of the existing “Notebooks” tab, Amazon is rolling out the “Workspace” section. Keith described this as “essentially like a new folder system.” Functionally, it didn’t appear too different from the Notebooks setup, other than making it easier to group your related documents so you can access, say, all the lists you’ve written up for your wedding planning or writing projects.

The redesigned home page will be launching later this year, and older Kindle devices will be able to update to the new software. The latest generation of Kindle Scribe will be available later this year, with the entry-level model going for $429, the version with the front light costing $499 and the Scribe Colorsoft starting at $629.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/kindle-scribe-colorsoft-hands-on-vivid-and-responsive-145147981.html?src=rss

Amazon adds the Kindle Scribe Colorsoft and Scribe 3 to its writing tablet lineup

Amazon is making two additions to its lineup of writing tablets: the Kindle Scribe 3, and the Kindle Scribe Colorsoft. The company showed off the new ereaders at its fall hardware event in New York. 

The Kindle Scribe Colorsoft is the first time Amazon has added a full-color display to its notebook-like ereader. According to the company, the full-color display is meant to look crisp but "subtle," without the harsh brightness of a typical tablet. The included pen will support writing in 10 different colors and five different shades of highlighter. 

Meanwhile, the new Scribe 3 has been redesigned to be significantly thinner and lighter than its predecessor. At 5.4mm thick, Amazon says it’s meant to have a “paper-like” design. It's also been revamped for a faster writing and page-turning experience. During the event, the company said that writing latency "is down to under 12 milliseconds."

[Image: Writing on the Kindle Scribe Colorsoft. Amazon]

The Scribe 3 and Scribe Colorsoft support importing documents from your existing Google Drive and OneDrive accounts. Both tablets come with a new "texture-molded glass" display and a redesigned LED light system at the front of the display. And the display itself has been "rearchitected" to make the writing experience feel more like writing on actual paper than on a tablet. Both Scribes come with an updated processor (Amazon described it as a "quad-core" chip) and increased memory compared with the Scribe 2.

Like last year's Scribe 2, both new models will come with a bunch of AI features, including the ability to generate summaries and search through your notes. Amazon also plans to integrate the Scribe devices into its Alexa+ service so users can ask Alexa questions based on the contents of their notebooks. Amazon is also adding an AI-powered summary feature for ebook readers, called "So Far," that will deliver summaries based on how much of a book you've already read, so you don't have to worry about potential spoilers.

The new Kindle Scribe devices go on sale in the United States “later this year” and will be available in the UK and Germany in early 2026. The third-generation Scribe will start at $500 and the Scribe Colorsoft starts at $630. Amazon will also sell a version of the Scribe 3 without a front light for $430.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/amazon-adds-the-kindle-scribe-colorsoft-and-scribe-3-to-its-writing-tablet-lineup-145031938.html?src=rss

Amazon unveils a new Fire TV lineup, including the $40 Fire TV Stick 4K Select

It's hard to muster much excitement for Amazon's Fire TV hardware these days — the company's main goal has been to offer cheap TVs and set-top boxes for mainstream consumers who haven't been swayed by more compelling offerings from Roku, Apple and Google. Apparently, not much is changing this year, judging from everything announced during Amazon's 2025 device launch event. There's a new lineup of Fire TV sets, as well as the Fire TV Stick 4K Select, which the company describes as "the fastest streaming stick under $40."

Once again, the star of Amazon’s TV selection is the Fire TV Omni QLED Series, which starts at $480 for the 50-inch model. The company says these new sets offer 60 percent better brightness, almost double the amount of local dimming zones (which helps with contrast and black levels) and a new processor that’s over 40 percent faster. The Omni series can also automatically adjust their settings to deal with lighting changes in your room.

[Image: Amazon Fire TV Stick 4K Select. Amazon]

The less exciting Fire TV 2-series and 4-series are 30 percent faster than before, but their main features are their low prices, starting at $160 and $330, respectively. All of Amazon’s new Fire TV sets also feature a new Dialog Boost option, as well as Omnisense, a feature that can automatically turn them on when you walk in the room (something that reeks of an Orwellian panopticon, much like the rest of Amazon’s Echo speakers and cameras).

Naturally, Amazon’s Fire TV devices are ways to lure you into Amazon’s $20-a-month Alexa+ subscription. With Alexa+, you can ask questions about actors on the screen, or have it suggest movies similar to a show you watched over the weekend. It can also direct you straight to content in Netflix and other services. None of that sounds compelling on its own, but if you have a ton of Echo devices, the AI benefits of Alexa+ might be worth the subscription.

As for the Fire TV Stick 4K Select, Amazon confirmed that it’s running its new Linux-based Vega OS, which replaces the old Android Fire TV software. That’s likely one reason why Amazon was able to bring the cost down to $40.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/amazon-unveils-a-new-fire-tv-lineup-including-the-40-fire-tv-stick-4k-select-144541988.html?src=rss

Amazon just revealed new Blink security cameras, including the Outdoor 2K+

Amazon just held a hardware event and introduced some new Blink security camera products. These include updates to the Blink Mini and Blink Outdoor: the Blink Mini 2K+ and the Blink Outdoor 2K+ are brand-new entries in the lineup that both capture 2K video for added detail. The previous versions were locked at 1080p.

The Blink Outdoor 2K+ features 4x zoom, enhanced low-light performance, two-way talk with noise cancellation and a whole lot more. It can detect both people and vehicles, automatically sending smartphone notifications to Blink Plus subscribers. The battery life is on-point and it includes the company's proprietary Weather Shield.

Amazon is calling the Blink Mini 2K+ its "most advanced plug-in compact camera yet." It can handle 2K video and can also be used outdoors, if you purchase a weather-resistant power adapter. 


The company also announced something called the Blink Arc, which is another camera primarily intended for outdoor use. This one can capture a panoramic view of a yard with maximum coverage. The Arc is actually two cameras in one, with an AI-enhanced algorithm that fuses the footage together into a single 180-degree panorama.

All of this stuff is available to pre-order right now. The Blink Mini 2K+ costs $50 and the Outdoor 2K+ costs $90. The Blink Arc costs a cool $100. 

This article originally appeared on Engadget at https://www.engadget.com/cameras/amazon-just-revealed-new-blink-security-cameras-including-the-outdoor-2k-144042562.html?src=rss

Amazon's Echo Spot is on sale for $50 ahead of Prime Day

If you’re looking to replace your old alarm clock with a modern, Alexa-powered smart alternative then, well, you have more options than ever. But $30 off last year’s updated Echo Spot might make your decision a bit easier.

The latest incarnation of the diminutive Spot was introduced in July 2024, and while it’s not quite available for its record low price of $45 right now, $50 is pretty close. For that you get a comfortably bedside-sized device with a sharper display than its predecessor, as well as superior sound. The front face is divided into two halves, with a speaker positioned below the hemispherical display.

What screen you do have is more than enough to display the time and weather information, plus it can show you the song or album title and accompanying artwork when you’re listening to music on those improved speakers. It can naturally be used to boss around your other connected smart devices, too.

Alexa might be baked in, but the Echo Spot is intended to be a fairly bare-bones smart alarm clock, so don’t expect as many features as you’ll find on something like the Echo Show 5. But a lot of people just want a modern alarm clock, and arguably the biggest selling point for the Echo Spot is its total lack of a camera. While that means it can do less than the original 2017 Echo Spot, which Amazon did put a camera in, the decision to remove it from a device that lives right next to your bed was probably for the best.

Amazon’s Prime Day sale returns on October 7, so you can expect a range of deals on its various Echo devices. For our guide to all of the best early deals, head here.

Follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/amazons-echo-spot-is-on-sale-for-50-ahead-of-prime-day-142550846.html?src=rss

Beats announces the Powerbeats Fit, a slightly updated successor to the Beats Fit Pro

If you’re looking for a new set of Beats earbuds but aren’t a fan of the company’s over-the-ear hook, there’s another fresh option to consider. The Apple-owned company revealed its latest model, the Powerbeats Fit, which looks a lot like the Beats Fit Pro that debuted in 2021. That’s because the new earbuds are the direct successor to that four-year-old audio accessory.

Don’t expect a comprehensive overhaul. In addition to the name change, Beats says it made the Powerbeats Fit more comfortable with a 20 percent more flexible wingtip, and the whole package is more compact thanks to a 17 percent smaller case. The price stays the same as the previous model, though, at $200.

Beats says that the updated wingtip makes the Powerbeats Fit sit as securely in your ears as the Powerbeats Pro’s hook design. The added flex means the earbuds are comfortable enough to be worn all day, according to the company, not just during workouts. The Powerbeats Fit still has an IPX4 rating for water resistance, so sweaty activities shouldn’t be a problem. And to further improve fit, Beats added an extra small ear tip to the three previous sizes (small, medium and large).

Beats' Powerbeats Fit have a familiar design.

In addition to slightly slimming down the charging case, Beats also added IPX4 moisture protection there. It’s not the most robust coverage, but it’s certainly better than nothing, and it’s enough to withstand water splashes near the pool or in the locker room. There are also new colors for the Powerbeats Fit: orange and pink.

Alongside these modest upgrades, the Powerbeats Fit retains much of what made the Beats Fit Pro a popular choice for ANC (active noise canceling) earbuds. Apple’s H1 chip powers the features once again, including Personalized Spatial Audio with dynamic head tracking, Adaptive EQ, Audio Sharing, hands-free Siri and automatic switching between devices. You’ll also get transparency mode, Find My and FaceTime with Dolby Atmos spatial audio.

The Powerbeats Fit settings are baked into iOS, but Android users will use a dedicated app for customization like other recent Beats devices. Here, you can expect one-touch pairing, customizable controls, battery status, Locate My Beats and an ear tip fit test. All of that is available to iPhone users too, so neither group of OS loyalists will miss out.

New colors for the Powerbeats Fit include pink and orange.

Onboard controls offer quick access to playback changes, volume adjustments, listening modes, calls and voice assistants. These are still physical buttons, which can be an important consideration over the tappable, touch-based panels that some of the competition employs. Battery life is also consistent with the Beats Fit Pro: up to seven hours on the earbuds and up to 30 hours total with the charging case with ANC off. Turn noise cancellation on and you can expect six hours of use (24 hours with the case).

The Powerbeats Fit is available for preorder today in black, gray, orange and pink color options for $200. The earbuds will hit retail shelves on October 2.

This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/beats-announces-the-powerbeats-fit-a-slightly-updated-successor-to-the-beats-fit-pro-140000905.html?src=rss

This Prime Day deal gets you two Blink Mini 2 cameras for only $35

Amazon's Prime Big Deal Days are coming up, and you can get a jump on things today. A mainstay of Prime Day sales, a pair of Blink Mini 2 cameras is on sale for only $35. That's 50 percent off, a record low and less than what you'd usually pay for one. It's also Engadget's pick for the best budget security camera.

This is the newest (2024) version of Blink's budget wired camera. It's well-suited for nighttime video: it has a built-in LED spotlight, color night vision and a low-light sensor. Day or night, it records in sharp 1080p resolution, and it has a wider field of view than its predecessor.

The Blink Mini 2 is primarily designed for indoor use. But you can use it outdoors, too. You'll just need to fork over $10 for a weather-resistant adapter. Wherever you use the camera, it works with Alexa and supports two-way audio. ("Hello, doggy, I'll be home soon.")

It also supports person detection, a neat feature that differentiates between people and other types of movement. That feature requires a Blink Subscription Plan, though; plans start at $3 per month or $30 per year for one device.

The camera is available in black or white. Both colors are offered at the $35 Prime Day price, but each two-pack is a single color; if you want one of each, you'll have to buy them separately. It's worth noting that this deal is open to anyone — no Prime subscription necessary. You can also save on a bunch of other Blink (and Ring) security gear. The Blink Outdoor 4 cameras are some of our favorites, and most configurations are on sale for Prime Day, including bundles like this three-camera system that's 61 percent off.

This article originally appeared on Engadget at https://www.engadget.com/deals/this-prime-day-deal-gets-you-two-blink-mini-2-cameras-for-only-35-201049416.html?src=rss

The Apple Watch Series 11 is already on sale

The Apple Watch Series 11 is already available on Amazon, and you can pick up select color and case combos for $10 less than Apple's base price. The newest generation of Apple's smartwatch was just revealed this month at the company's iPhone 17 event in Cupertino.

The Series 11 packs some new features like 5G connectivity on cellular models, a more scratch-resistant screen, new sleep features, improved battery life and a hypertension alert system that just received FDA clearance. The GPS-only version is our top pick for Best Apple Watch in 2025.

In our hands-on review, we gave the Apple Watch Series 11 a score of 90 out of 100, noting its thin and light design, excellent battery life, a nifty new wrist-flick gesture and its comprehensive approach to health and fitness monitoring. It is relatively pricey, however, and the Watch SE 3 is probably enough for most users. That said, the Series 11 has a brighter and larger display, a thinner design, longer battery life and more advanced health features.

For anyone who hasn't bought a new Apple Watch in a few years, the Series 11 is a worthy upgrade. If you're in the market for your first Apple Watch, then this model would be a great one to start with. If you're rocking a Series 10, then you probably don't need to upgrade now unless the improved battery life will mean that much to you.

The Apple Watch Series 11 is available on Amazon in all sizes, colors and connectivity options. There are a few case color and band combinations that are $10 off Apple's base price.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-apple-watch-series-11-is-already-on-sale-135020671.html?src=rss

Prime Day laptop deals: Save on some of our favorite machines from Apple, Dell, Lenovo, HP and others

If your laptop simply isn’t cutting it anymore, October Prime Day might have arrived just in time. As has been the case for the past few years, laptop deals are abundant for Amazon's Big Deal Days, bringing discounts to MacBooks, Windows laptops, Chromebooks and more. But we wouldn’t blame you if you didn’t know how to figure out if that laptop you’re eyeing actually has a good discount for Prime Day, or if the deal is stale.

That’s where Engadget can help. We’ve pored over the Prime Day laptop deals available this year to pick out the best ones across all kinds of computers. If you’re super picky about the specs you want in a new laptop, we always recommend going straight to the manufacturer so you can configure the machine exactly to your needs. But if you’re willing to work with premade models, October Prime Day deals could help you save some cash on your next laptop.

Apple’s latest laptops are the MacBook Air M4 and the MacBook Pro M4, and we recommend getting those if you want a device that’s as future-proof as possible at the moment. You’ll find decent MacBook deals on Amazon throughout the year, and most of them will be on the base configurations. In a welcome update earlier this year, Apple bumped all base models of the MacBook Air M4 to 16GB of RAM by default (the same as you’ll find on the base-level Pros).

You’ve got a lot of variety to choose from when it comes to Windows laptops, and that can be a blessing or a curse. We recommend looking for a laptop from a reputable brand (e.g. Microsoft, Dell, Acer, Lenovo and others like them), and one that can handle daily work or play pressures. That means at least 16GB of RAM and 256GB of SSD storage, plus the latest Intel or AMD CPUs. If you’re looking for a new gaming laptop, you’ll need a bit more power and a dedicated graphics card to boot.

Most Chromebooks are already pretty cheap, but that just means you can get them for even less during an event like Prime Day. However, there are a ton of premium Chromebooks available today that didn’t exist even three years ago, so now is a great time to look out for discounts on those models. In general, we recommend looking for at least 4 to 8GB of RAM and at least 128GB of SSD storage in a Chromebook that you plan on using as your daily driver.

This article originally appeared on Engadget at https://www.engadget.com/deals/prime-day-laptop-deals-save-on-some-of-our-favorite-machines-from-apple-dell-lenovo-hp-and-others-130507439.html?src=rss

Amazon's Smart Plug is cheaper than ever for Prime Day

There are few things as simple yet exceedingly annoying as having to get up and turn off a light. Whether you're already comfortable in bed or live with mobility limitations, smart plugs can be a great option — especially when they're on sale.

Right now, you can pick up an Amazon Smart Plug for a record-low price of $13, down from $25. The 48 percent discount comes as part of early Amazon Prime Day sales, ahead of the main event next week. You can also pick up a two-pack of Amazon's Smart Plugs for $24, down from $50 — a 52 percent discount. 

The Amazon Smart Plug is our pick for best smart plug if you have an Alexa-enabled home. You can tell Alexa to turn off the lights or control it with the Alexa app. It's compatible with most plugged in devices, from lamps and fans to even kitchen appliances. You can also set it to turn on lights or devices at a certain time each day.

This article originally appeared on Engadget at https://www.engadget.com/deals/amazons-smart-plug-is-cheaper-than-ever-for-prime-day-130446173.html?src=rss

DoorDash introduces a cute delivery robot named Dot

At its Dash Forward keynote, DoorDash unveiled a cute electric delivery robot named Dot, designed specifically for quick neighborhood trips. Dot is around one-tenth the size of a car, can travel at up to 20 mph and can navigate not just roads but also bike lanes and sidewalks. It's small enough to fit through doorways and up driveways, and it can help local businesses meet demand from people who prefer to shop from the comfort of their own homes. The robot was developed in-house by DoorDash Labs to integrate with the company's new Autonomous Delivery Platform, an AI dispatcher that matches orders with the best delivery method.

"You don’t always need a full-sized car to deliver a tube of toothpaste or pack of diapers," said Stanley Tang, the head of DoorDash Labs. "That’s the insight behind Dot." To start with, the company is launching an early access program for Dot in Tempe and Mesa, Arizona. DoorDash said it's the beginning of Dot's commercial deployment and that the robot will make its way to new markets in the future. 

The company has assured Dashers that Dot will not replace them. Human Dashers will still handle the "vast majority" of deliveries; Dot is meant to free them up for higher-value orders while it fills in the gaps on short local trips that don't pay as much. DoorDash said it needed more "innovative ways to keep pace with demand" to support more local businesses as it expands to new regions. It previously teamed up with Coco Robotics to offer sidewalk robot deliveries in LA and Chicago, and it offers drone deliveries in Christiansburg, Virginia and Frisco, Texas.

This article originally appeared on Engadget at https://www.engadget.com/apps/doordash-introduces-a-cute-delivery-robot-named-dot-130036016.html?src=rss

Google's AI Mode gets better at understanding visual prompts

Since it began rolling out AI Mode at the start of March, Google has been slowly adding features to its dedicated search chatbot. Today, the company is releasing an update it hopes will make the tool more useful for visual searches.

If you've tried to use AI Mode since Google made it available to everyone in the US, you may have noticed it responds to questions about images with a lot of text. Robby Stein, vice president of product for Google Search, admits it can be "silly" to see text in that context, so the company has been applying AI Mode's "query fan-out" technique to images. Now, when you prompt AI Mode to find images of "moody but maximalist" bedrooms, for instance, it runs multiple searches in the background to better understand exactly what you're looking for.
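In spirit, "query fan-out" just means issuing several reformulations of one prompt in parallel and merging the deduplicated results. A toy Python sketch of that idea, where `rewrite` and `search` are hypothetical stand-ins (not Google APIs) supplied by the caller:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(prompt, rewrite, search, n=4):
    """Run `n` reformulated queries concurrently and merge results,
    keeping first-seen order and dropping duplicates."""
    queries = [rewrite(prompt, i) for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        batches = pool.map(search, queries)  # preserves query order
    seen, merged = set(), []
    for batch in batches:
        for item in batch:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged
```

A production system would rank the merged results rather than simply concatenating them; this sketch only illustrates the parallel fan-out and merge.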

Google has built this feature to be multimodal, meaning you can start a conversation with an image or video. And as you can probably guess, Google believes these capabilities will be particularly useful in a shopping context. You could use AI Mode to shop before today, but Google argues the experience benefits greatly from the more visual responses the chatbot is able to generate. What's more, it's better able to make sense of tricky queries like "find me barrel jeans that aren't too baggy." Once AI Mode generates an initial response, you can ask follow-up questions to refine your search.

As with any Google update, it may take a few days for the company to roll out the updates it announced today to everyone. So be patient if you don't see the new, more visual experience right away.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-ai-mode-gets-better-at-understanding-visual-prompts-130001201.html?src=rss

Prime Day deals include up to 58 percent off Shark robot vacuums

With fall Prime Day around the corner, we're already starting to see solid deals on tech we love. Case in point: Shark robot vacuums. Shark makes some of our favorite robovacs and a few of them are already discounted for Prime members ahead of the sale. The Shark AV2501S AI Ultra robot vacuum is one of them, with a whopping 58-percent discount that brings it down to $230. This discount marks a record low for this model.

Shark offers several variations of its AI Ultra robot vacuums. The differences between them are small, and a different model is our pick for the best robot vacuum for most people. In general, you can expect solid cleaning performance from these devices, along with accurate home mapping and an easy-to-use app.

The model that's on sale here is said to run for up to 120 minutes on a single charge, which should be enough to clean an entire floor in a typical home. The self-emptying, bagless vacuum can store up to 30 days worth of dirt and debris in its base. Shark says it can capture 99.97 percent of dust and allergens with the help of HEPA filtration.

If you'd rather plump for a model that's able to mop your floors too, you're in luck: a Shark Matrix Plus 2-in-1 vacuum is on sale as well. At $300 for Prime members, this vacuum is available for $400 (or 57 percent) off the list price. Its mopping function can scrub hard floors 100 times per minute. You can also trigger the Matrix Mop function in the app for a deeper clean. This delivers 50 percent better stain cleaning in targeted zones, according to Shark.

This article originally appeared on Engadget at https://www.engadget.com/deals/prime-day-deals-include-up-to-58-percent-off-shark-robot-vacuums-171836574.html?src=rss

Opera's AI browser will cost you $20 a month

Would you pay $20 for an AI-powered browser? Opera is betting on it with the release of its $19.90 (per month) "next generation AI browser," Opera Neon, meant for people who use AI every day. The Norwegian company first announced Neon in May and has now launched it to a limited number of users. 

According to Opera, "it's a browser built to not only let you browse the web, but to also use agentic AI to act for you and with you as you browse and work on complex projects. Opera Neon moves beyond a simple AI chat to execute tasks, create code, and deliver outcomes directly within the browser experience."  

Opera Neon includes features such as Tasks, which act as dedicated workspaces where you can use AI for things like comparing and analyzing sources. There's also Cards, a library of reusable AI prompts that saves you from rewriting the same prompt over and over. You can make your own prompts or pull them from the community's collection. 

Then there's Neon Do, which works within a Task to navigate the web on your behalf: checking sources, looking up information, completing forms and more. 

Opera is hoping that this "premium, subscription-based browser" will entice users enough to pay $19.90 per month, rather than use free options such as Google's Gemini-powered Chrome features. You can join the waitlist to try it yourself, with Opera claiming more spots will become available soon. 

This article originally appeared on Engadget at https://www.engadget.com/ai/operas-ai-browser-will-cost-you-20-a-month-123022110.html?src=rss

Pick up Apple's 25W MagSafe charger while it's on sale for $35 ahead of Prime Day

On the heels of the iPhone 17 lineup being released a few weeks ago, you can pick up Apple's 25W MagSafe charger for a song. The two-meter version of the more powerful charging cable has dropped by 29 percent from $49 to $35. That's a record-low price.

As it happens, that actually makes the two-meter version of the cable less expensive than the one-meter variant. The shorter cable will run you $39 as things stand.

If you have an iPhone 16, iPhone 17 or iPhone Air, this cable can charge your device at 25W as long as it's connected to a 30W power adapter on the other end. While you'll need a more recent iPhone to get the fastest MagSafe charging speeds, the charger can wirelessly top up the battery of any iPhone from the last eight years (iPhone 8 and later). With older iPhones, the charging speed tops out at 15W. The cable works with AirPods wireless charging cases too — it's certified for Qi2.2 and Qi charging.

The MagSafe charger is one of our favorite iPhone accessories, and would pair quite nicely with your new iPhone if you're picking up one of the latest models. If you're on the fence about that, be sure to check out our reviews of the iPhone 17, iPhone Pro/Pro Max and iPhone Air.

This article originally appeared on Engadget at https://www.engadget.com/deals/pick-up-apples-25w-magsafe-charger-while-its-on-sale-for-35-ahead-of-prime-day-143415264.html?src=rss

How to buy (and try) the Meta Ray-Ban Display glasses

Meta's Ray-Ban Display glasses are now on sale, but actually buying a pair will be a bit more complicated than ordering a pair of Meta's other smart glasses. That's because Meta isn't allowing online sales of its display glasses. Instead, they are only available by reservation at a handful of physical retail stores.

For now, the $799 Meta Ray-Ban Display glasses are available at select Ray-Ban, Sunglass Hut, LensCrafters and Best Buy locations in the United States. Verizon will also start carrying the glasses sometime "soon," according to Meta. The company will also allow people to demo and buy a pair at its own Meta Lab locations. These include the Burlingame, California space that opened as the "Meta Store" in 2022, as well as pop-ups in Las Vegas, Los Angeles and New York opening in the coming weeks. 

In order to actually get your hands on a pair, though, you'll need to book an appointment for a demo at one of these stores through Meta's website. According to Meta, this is "to make sure customers get the glasses and band that’s perfect for them." (In my own experience with both the Meta Ray-Ban Display glasses and the Orion prototype, the neural wristband requires a snug fit to function properly.) An appointment will also give shoppers the opportunity to order prescription lenses for the glasses. The glasses only support a prescription range of -4.00 to +4.00, though, so they won't be compatible with all prescriptions.

The company recently said it's seen "strong" demand for demos, and most locations already appear to be booked out for several weeks, judging by Meta's scheduling website. Getting a demo will also be difficult if you don't live near a major city. For example, Sunglass Hut's website currently lists just seven locations where demos will be available. 

The good news is that Meta does plan to eventually increase availability. The company has said the Meta Ray-Ban Display glasses will be available in Canada, France, Italy and the UK beginning in "early 2026" and that it expects buying options will "expand" the longer they're on sale. 

Sales of the glasses, which are Meta's first to incorporate a heads-up display, will be closely watched. At $799, the glasses are significantly more expensive than the rest of the frames in Meta's expanding lineup of "AI glasses." But, as I wrote after my recent demo at Meta Connect, the display also enables wearers to do much more than what's currently possible with the existing Ray-Ban or Oakley models.

This article originally appeared on Engadget at https://www.engadget.com/how-to-buy-and-try-the-meta-ray-ban-display-glasses-121500138.html?src=rss

Early October Prime Day 2025 tech deals under $50: Save on gear from Apple, Anker, Ring, JBL and Roku

The event hasn't officially begun, but we've already found some of the best Prime Day deals under $50. The October Prime Day sale, or Prime Big Deal Days as Amazon calls it, is a great time to stock up on smaller tech like Bluetooth trackers, mini speakers, earbuds, mice, power banks, wall chargers and more. Everything here is pulled from our own guides and reviews — products and brands we’ve tried ourselves and currently recommend. If you want to snap up a whole bunch of new tech without spending too much, this list of the best Prime Day deals under $50 is a great place to start.

Amazon Fire TV Stick 4K Max for $40 ($20 off): Amazon's most powerful streaming dongle supports 4K HDR content, Dolby Vision and Atmos and Wi-Fi 6E with double the storage of cheaper Fire TV sticks. It earned an honorable mention in our guide to streaming devices and also happens to make a good retro gaming emulator.

Ring Battery Doorbell for $50 ($50 off): At $49.99 this juuust qualifies as an under $50 tech deal. If you don’t have doorbell wires at your front entrance, you can still have a camera to capture all the package deliveries and neighborhood animal sightings with the Ring Battery Doorbell. It records video in HD with more vertical coverage than the last model, so you can see people from head to toe.

Blink Mini 2 security cameras (two-pack) for $35 ($35 off): This is the top budget pick in our guide to the best security cameras. The Mini 2 is a great option for indoor monitoring or you can put it outside with a weatherproof adapter, but since it needs to be plugged in, we like it for keeping an eye on your pets while you're away and watching over entry ways from the inside.

Anker 622 5K magnetic power bank with stand for $34 ($14 off with Prime): This 0.5-inch thick power bank attaches magnetically to iPhones and won't get in your way when you're using your phone. It also has a built-in stand so you can watch videos, make FaceTime calls and more hands-free while your phone is powering up.

Amazon Smart Plug for $13 ($12 off): We named this the best smart plug for Alexa users because it hooks up painlessly and stays connected reliably. Use it to control lamps or your holiday lights using programs and schedules in the Alexa app, or just your voice by talking to your Echo Dot or other Alexa-enabled listener.

Levoit Mini Core-P air purifier for $40 ($10 off with Prime): This is the mini version of the top pick in our guide to air purifiers. It has a three-stage filter (pre, activated carbon and particle filters) though that particle filter is not a true HEPA filter. But it’s rated at 250 square feet and can help clear the air in your office or other small room.

Echo Pop smart speaker for $25 ($15 off): The half sphere Pop is the most affordable Echo speaker in Amazon’s lineup. The sound won’t be as full as its larger siblings, but will do a fine job of bringing Alexa’s help to smaller rooms. Just note that it went as low as $18 for Black Friday and October Prime Day last year.

Roku Streaming Stick Plus 2025 for $29 ($11 off): This is our top pick for the best streaming device for accessing free and live content. The dongle supports 4K video and HDR and doesn’t need to be plugged into the wall for power. It’s a great way to access any streaming service you could ask for: Netflix, Prime Video, Disney+, HBO Max and many more.

Leebein 2025 electric spin scrubber for $40 ($30 off with Prime): This is an updated version of the electric scrubber we love that makes shower cleaning easier than ever before. It comes with seven brush heads so you can use it to clean all kinds of surfaces, and its adjustable arm length makes it easier to clean hard-to-reach spots. It's IPX7 waterproof and recharges via USB-C.

Jisulife Life7 handheld fan for $25 ($4 off with Prime): This handy little fan is a must-have if you live in a warm climate or have a tropical vacation planned anytime soon. It can be used as a table or handheld fan and even be worn around the neck so you don't have to hold it at all. Its 5,000 mAh battery allows it to last hours on a single charge, and the small display in the middle of the fan's blades shows its remaining battery level.

Anker Soundcore Select 4 Go speaker for $26 ($9 off with Prime): This is one of our top picks for Bluetooth speaker. It gets pretty loud for its size and has decent sound quality. You can pair two together for stereo sound as well, and its IP67-rated design will keep it protected against water and dust.

Amazon Echo Spot for $50 ($30 off): Amazon brought the Echo Spot smart alarm clock back from the dead last year with a new design and improved speakers. In addition to being able to control smart home devices and respond to voice commands, the Echo Spot can also act as a Wi-Fi extender for those that have Eero systems. It went as low as $45 for Black Friday last year.

Samsung EVO Select microSD card (256GB) for $23 ($4 off): This Samsung card has been one of our recommended models for a long time. It's a no-frills microSD card that, while not the fastest, will be perfectly capable in most devices where you're just looking for simple, expanded storage.

JBL Go 4 portable speaker for $40 (20 percent off): The Go 4 is a handy little Bluetooth speaker that you can take anywhere thanks to its small, IP67-rated design and built-in carrying loop. It'll get seven hours of playtime on a single charge, and you can pair two together for stereo sound. The previous model, the JBL Go 3, is on sale for $30.

Anker Soundcore Space A40 for $45 (44 percent off): Our top pick for the best budget wireless earbuds, the Space A40 have surprisingly good ANC, good sound quality, a comfortable fit and multi-device connectivity.

Blink Outdoor 4 security camera for $35 ($45 off): We named this the best choice for Alexa users in our guide to security cameras. It works seamlessly with Alexa devices like the Echo speakers and Show displays. Plus it can run for up to two years on a set of AA batteries and we found the motion detection to be spot on.

This article originally appeared on Engadget at https://www.engadget.com/deals/early-october-prime-day-2025-tech-deals-under-50-save-on-gear-from-apple-anker-ring-jbl-and-roku-120531892.html?src=rss

Nothing spin-off CMF announces $100 Headphone Pro

Smartphone company Nothing has built quite the line of audio accessories, and it now includes a new over-ear headphone with adaptive ANC (active noise cancellation) from sub-brand CMF. The Headphone Pro offers remarkable specs for less than $100, with features like 40dB of noise cancellation, support for Sony's LDAC codec and Hi-Res certification for both wired and wireless audio, along with an "Energy Slider" to adjust EQ.

The CMF Headphone Pro doesn't at all resemble Nothing's boxy over-the-ear Headphone 1 cans. While that design was rather eccentric and austere, CMF's model has a softer, more conventional look with a rounded headband reminiscent of Sony's WH-1000XM5s. Another prominent feature is the large, interchangeable ear cups that appear to have generous padding.

CMF's new cans come with adaptive ANC that reduces outside sounds by 40dB, or up to 99 percent, and automatically adjusts the level according to outside noise. Battery life is a generous 100 hours with ANC disabled, though that gets cut in half to 50 hours with ANC turned on. That's still more than Sony's new WH-1000XM6 manages even with its ANC off. You can get an additional four hours of playback from just a five-minute charge, and the Headphone Pro can be charged directly by some smartphones via a USB-C cable. 

Control-wise, the Headphone Pro is nicely analog, with buttons instead of the touch controls found on other headphones. Those include a Bluetooth/power button on one side and an action button on the other that's customizable via Nothing's X app. There's a multifunction rocker for volume, playback and ANC/ambient sound control. Then there's the Energy Slider, which lets you make treble and bass adjustments without diving into the X app's EQ settings.

Nothing's CMF sub-brand will soon spin off into its own budget brand, the company announced recently. That doesn't seem to have happened yet, but you can now order the CMF Headphone Pro for just $84 in light grey, dark grey and light green, with shipping set for October 6. The company will soon offer interchangeable ear cushions as well in orange or light green for $25 a pair. 

This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/nothing-spin-off-cmf-announces-100-headphone-pro-120002029.html?src=rss

The Morning After: What to expect from Amazon’s big devices event today

The fall tech events just won’t stop. Today, Amazon has its fall hardware event, which is likely to reveal improvements to voice assistant Alexa and some new Echo homes for it to live inside. It’s been a couple years since the Echo Show got an update, and it’s been even longer for the standard Echo.

The invitation suggests we're expecting some Kindle upgrades too: the image on it shows a Kindle with a color illustration. The Kindle Scribe 2 came out earlier this year, as did the Kindle Colorsoft, so maybe there's something in the works that combines the best features of both.

While Alexa and Kindle will be the main draws, Amazon's other tech brands, such as Ring and Eero, may also be present. In short, it's likely to be a busy event.

It all kicks off at 10AM ET in New York City, where we’ll be reporting live. Stay tuned for all the announcements on our Amazon devices liveblog. There’s no video livestream, so we’ll be updating from the event like it’s 2010.

— Mat Smith

Get Engadget's newsletter delivered direct to your inbox. Subscribe right here!


Electronic Arts has agreed to a $55 billion acquisition that will take the company private. Saudi Arabia's Public Investment Fund (PIF), Silver Lake and Affinity Partners have reached a deal to buy EA and its collection of sports game franchises and, er, other games that have recently struggled. This year, the company canceled an upcoming Black Panther game and closed the studio behind it, and has reportedly "shelved" its Need for Speed franchise. Then there was Anthem. The deal, the largest-ever leveraged buyout, marks the end of EA's 35-year run as a publicly traded company.

Continue reading.


The Federal Communications Commission (FCC) recently published a 163-page PDF showing the electrical schematics for the iPhone 16e, despite Apple specifically requesting that they be kept confidential. This was most likely a mistake on the FCC's part, according to a report by AppleInsider.

The files included block diagrams, electrical schematic diagrams, antenna locations and more. Competitors could simply buy a handset and open it up to access this information, as the iPhone 16e was released back in February, but this leak would eliminate any guesswork. The FCC hasn’t addressed how this leak happened.

Continue reading.

Sony has been marking the 30th anniversary of PlayStation by selling stuff. Things like PS5 consoles and accessories styled after the PS1. (I just got the controller. Brag.) The company is also publishing a photography book showcasing “never-before-seen prototypes, concept sketches and design models that shaped hardware development” from the early days through to the current PS5 era. Sony has also teamed up with Reebok for a collection of 30th anniversary sneakers styled after the PS1.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-what-to-expect-from-amazons-big-devices-event-2025-113059183.html?src=rss

OpenAI will let you buy things from Etsy within ChatGPT

You'll now be able to buy some items you're looking for without leaving your ChatGPT conversation. OpenAI has launched a new feature called Instant Checkout, which is powered by Agentic Commerce Protocol, a technology it developed with Stripe. When you search for items to buy through ChatGPT, you'll be able to see which ones you can buy from within the chatbot among the products it shows you. The feature is available for both free and paid users, but it only supports single-item purchases from Etsy sellers in the US at the moment. 

OpenAI says over a million sellers that use Shopify, including Glossier, SKIMS and Spanx, will support Instant Checkout "soon." It's also adding multi-item cart checkout and expanding the feature to more regions in the future. The company is open sourcing the Agentic Commerce Protocol to let more merchants build their own ChatGPT integrations.

In its post, OpenAI said that it will continue ranking the product results most relevant to your search query based on availability, price and quality. It will not give products that support Instant Checkout a boost and will not rank them higher than other options just because of the feature. Your orders and payments will still be handled by the merchant you're buying from, and you can either use your card on file with OpenAI or other available payment options. The company also said that it's the merchants who'll be paying a "small fee on completed purchases," and that Instant Checkout will not affect product prices for you.
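OpenAI's stated ranking policy, relevance signals only with no boost for Instant Checkout eligibility, can be illustrated with a toy scorer; the weights and fields below are invented for illustration and are not OpenAI's actual system:

```python
# Illustrative sketch only: a ranker that stays neutral to checkout support.
# Weights and field names are hypothetical, not OpenAI's implementation.

def rank_products(products):
    """Order products by relevance signals; 'instant_checkout' is ignored."""
    def score(p):
        # Hypothetical weights over the signals OpenAI says it uses:
        # availability, price and quality.
        return (
            2.0 * p["availability"]   # in-stock signal, 0 or 1
            + 1.5 * p["quality"]      # e.g. normalized review score, 0..1
            - 0.5 * p["price_rank"]   # cheaper items rank earlier, 0..1
        )
    return sorted(products, key=score, reverse=True)

products = [
    {"name": "A", "availability": 1, "quality": 0.9, "price_rank": 0.2, "instant_checkout": False},
    {"name": "B", "availability": 1, "quality": 0.9, "price_rank": 0.2, "instant_checkout": True},
]
# Identical relevance signals yield identical scores, so checkout support
# never reorders otherwise-equal products.
```

Because the score function never reads `instant_checkout`, two products with the same availability, quality and price signals tie regardless of whether they support the feature.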

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-let-you-buy-things-from-etsy-within-chatgpt-110032055.html?src=rss

The best Nintendo Switch 2 games for 2025

The Nintendo Switch 2 didn’t come out of the gate with a host of exclusive, must-play games. But we’re a few months into the console’s lifecycle now, and there are a variety of Switch 2-only games that are worth your cash, as well as a bunch of original Switch games that have received improvements for the new console. And there’s also a robust selection of third-party games that have been available on other consoles for a while, but not on the Switch.

Between all those, there are plenty of good games for the Switch 2, and if you don’t have an original Switch, there’s even more out there. You can see our list of our favorite Switch games here, but this list focuses on Switch 2 exclusives, original Switch games that have been improved for the new hardware and the best-performing third-party titles worth your time. And keep an eye on this list, as there should be a lot more Switch 2 exclusives coming this fall that we're excited to try, including eagerly awaited titles like Metroid Prime 4.

Check out our entire Best Games series including the best Nintendo Switch games, the best PS5 games, the best Xbox games, the best PC games and the best free games you can play today.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/best-nintendo-switch-2-games-070007467.html?src=rss


How technocracy made us doubt progress

The 20th century left us with fatalism, defeatism, and a hollowed-out vision of the future. Techno-humanism can restore our belief in progress.


How To Build AI Red Teams That Actually Work

Generative AI is everywhere. It’s in your customer support workflows, embedded in your analytics dashboards, and quietly powering your internal tools. But while the business rushes to deploy, security teams are left trying to secure systems they didn’t design, didn’t know about, and can’t easily test. That’s where AI red teaming comes in. AI red […]

AI: Our Modern-Day Cyrano

Edmond Rostand’s play, “Cyrano de Bergerac,” centers on a self-conscious poet named Cyrano who assists his handsome but less articulate friend Christian in winning the heart of Roxane, a woman whom they both love. As we all know, by presenting Cyrano’s eloquent words as his own, Christian ultimately wins Roxane’s affection. Today, AI plays a similar […]


WhatsApp adds support for Live Photos on iOS, Motion Photos and Document scanning on Android

WhatsApp has announced a few new features for Android and iOS. You can now send Live Photos on iOS and Motion Photos on Android. You may be aware that WhatsApp already […]

Thank you for being a Ghacks reader. The post WhatsApp adds support for Live Photos on iOS, Motion Photos and Document scanning on Android appeared first on gHacks Technology News.

Ask Brave: new AI-powered search feature launches on Brave Search

Are AI powered search engines the future of search? Classic search engines such as Google Search or Microsoft Bing are getting AI-feature infusions to compete with new AI-based search engines. Brave Search, one […]

Opera Neon AI agentic browser released in early access

In May 2025, Opera announced its agentic browser called Opera Neon AI. Now, it is available in early access. Let's see what the fuss is all about. Note: This is just an […]


Jared Leto’s Weird-Ass ‘Tron: Ares’ Set Behavior Was Maybe His Least Weird-Ass Set Behavior

At least he didn't try to get inside a computer or anything.

The Moment the ‘KPop Demon Hunters’ Crew Knew They Had a Hit in ‘Golden’

There's much more than just an apparently fateful trip to the dentist in the creation of the Netflix hit's breakout song.

OpenAI Officially Launches Video Generator Sora 2, Now With Social Feed

A video of Sam Altman generated by OpenAI's Sora 2

Slop Watch is in full effect.

Ford CEO Predicts Trump’s EV Policies Could Cut Demand in Half

Ford CEO Jim Farley's comments came on the day the EV tax credit ended in the US.

How Found Footage Helped Blumhouse Build Its Horror Empire

An excerpt from 'Horror's New Wave: 15 Years of Blumhouse' digs into 'Sinister,' a key early success for the studio.

Amazon Echo Dot Max and Studio Hands-On: Do You Need an Army of Echo Speakers?

Amazon's Alexa Home Theater feature invites you to buy five Echo Studios and use them as a home theater system.

The Best Gadgets of September 2025

Between IFA, Apple, and Meta Connect, September was... a gadget lover's dream.

Elon Musk’s Wikipedia Competitor Is Going to Be a Disaster

Elon Musk attends the memorial service for political activist Charlie Kirk at State Farm Stadium on September 21, 2025 in Glendale, Arizona.

Remember when Grok praised Adolf Hitler?

‘Frankenstein’ (the Book) Gets a Special Edition Ahead of ‘Frankenstein’ (the Movie)

Jacob Elordi's monster, star of Guillermo del Toro's upcoming Netflix release, adorns a new cover for Mary Shelley's 1818 Gothic classic.

Democrats Spooked by Trump’s Plan to Hand Over Weapons-Grade Plutonium to Private Firms

The move goes against decades of the United States' nonproliferation policy, lawmakers argue.

Kindle Scribe Colorsoft Hands-On: Notetakers Are Going to Love This

But maybe not the price.

Dyson Isn’t Doing So Great

The company's annual profits were nearly slashed in half despite record sales.

What Does James Mangold’s New Paramount Deal Mean for ‘Star Wars’ and ‘Swamp Thing’?

The 'Logan' and 'Indiana Jones' director just signed an overall deal with the studio, which may worry genre fans.

This Is How a Venus Flytrap Knows It’s Time to Snap Shut

These all-natural traps appear to run on calcium.

Cannabis Can Help Relieve Chronic Low Back Pain, Major Trial Finds

Vertanical has already filed for approval of its cannabis-based drug, named VER-01, in Europe.

Silicon Valley’s Obsession With Fertility Has Spawned ‘Sperm Races’

A promotional image from Sperm Racing showing two competitors in front of an audience

Can y'all be normal about anything?

Hot Toys Is Making a Figure of the Best ‘Terminator’ Endoskeleton

The 'Terminator 2: Judgment Day' endoskeleton is coming to your shelf, and it's been through hell.

‘Marvel Rivals’ Is Adding Daredevil and ‘Marvel Zombies’ Skins Next Month

NetEase's Marvel hero shooter gets the Devil of Hell's Kitchen and some timely tie-in skins right in time for spooky season.

OpenAI Says ChatGPT Can Already Do Some Work Tasks as Well as Humans

OpenAI logo on a laptop

OpenAI’s latest study argues that today’s top models already rival humans on real-world tasks, though it swears they won’t fully replace us.

How a Looming Government Shutdown Could Disrupt Transportation and Cybersecurity

Federal workers across agencies are facing massive layoffs as Democrats and Republicans are in a standoff over funding.


Beyond RL: A New Paradigm for Agent Optimization

Subscribe • Previous Issues | A Better Way to Build and Refine Agents. Modern AI applications have evolved far beyond single models. Many systems orchestrate multiple specialized agents: planners that decompose tasks, extractors that gather data, and generators that create content, all coordinating through external tools and APIs. This architectural shift creates a fundamental optimization problem: the […] Continue reading "Beyond RL: A New Paradigm for Agent Optimization"
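The planner/extractor/generator split described above can be sketched in a few lines; the agents below are trivial stand-ins, invented purely to show the orchestration shape rather than any real framework:

```python
# A minimal sketch of the multi-agent orchestration pattern: a planner
# decomposes a task, an extractor gathers data per subtask, and a
# generator assembles the final output. All agent logic is a stand-in.

def planner(task):
    # Decompose the task into ordered subtasks.
    return [f"research: {task}", f"summarize: {task}"]

def extractor(subtask):
    # Stand-in for a tool/API call that gathers data for one subtask.
    return {"subtask": subtask, "data": f"notes on '{subtask}'"}

def generator(findings):
    # Combine the gathered data into final content.
    return " | ".join(f["data"] for f in findings)

def run_pipeline(task):
    steps = planner(task)
    findings = [extractor(s) for s in steps]
    return generator(findings)
```

The optimization problem the newsletter gestures at is that quality depends on all three stages jointly, so tuning any one agent in isolation (as single-model RL would) misses cross-stage interactions.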

The post Beyond RL: A New Paradigm for Agent Optimization appeared first on Gradient Flow.


First Text, Then Music - Now, AI Comes for 3D

Artificial intelligence has brought us back to the world where startup pitch events feature companies that leave you re-thinking what’s possible.

Read All

How Progress Ends - A Review: Revisiting Innovation, Institutions, and Everything in Between

Frey’s How Progress Ends argues that cycles of centralization and decentralization shape technological progress. I argue that his framework overlooks labor power, romanticizes “disruption,” and ignores how technology could enable worker-controlled production. True innovation may require decentralization plus public provision, not just market competition.

Read All

The HackerNoon Newsletter: Can We Terraform Our Way Out of Earth? (9/30/2025)

9/30/2025: Top 5 stories on the HackerNoon homepage!

Read All

Why 0G Foundation Appointed Dr. Jonathan Chang to Lead Its Decentralized AI Push

0G Foundation has appointed Dr. Jonathan Chang, former CEO of Heritage Singapore, to its board of directors to advance decentralized AI adoption globally. Chang brings experience from fintech, education, and cultural sectors, along with connections to policymakers and academic institutions. His role focuses on positioning decentralized AI as a public good rather than a corporate-controlled technology.

Read All


Building Stronger Support Systems for Rural Healthcare

There are nearly 2,000 rural community hospitals, accounting for just over a third of all hospitals in the U.S., according to the American Hospital Association. These hospitals serve millions of Americans in critical need of healthcare, which is why the risk of more closures is a major concern: it would further limit care access for vulnerable communities. The financial health of many rural hospitals is precarious because they operate on thin margins, have limited revenue sources and rely heavily on government funding. And with recent legislative changes, including…

Connected Workstations: Transforming Fleet Management and Patient Care

Healthcare systems are under constant pressure to find ways to stretch IT resources, ensure staff have essential tools and balance cost control with the demand for advanced technology. For many organizations, one asset is critical: mobile workstations. As hospitals expand across multiple sites and often rely on hundreds of these carts daily, keeping them functional, accessible and secure becomes a massive challenge. That’s where connected workstations with integrated software are changing the game. By offering real-time visibility, proactive maintenance and streamlined workflows, these…


Perplexity Launches Search API to Power Next-Gen AI Applications

Perplexity has introduced the Search API, opening up access to the same infrastructure that underpins its public answer engine. With coverage of hundreds of billions of webpages and infrastructure tuned for AI-heavy workloads, the new API is aimed at developers who want real-time, reliable search results for building their own agents, applications, and retrieval-augmented pipelines.
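For a sense of what consuming such a search API looks like from a retrieval-augmented pipeline, here is a minimal hypothetical client sketch; the endpoint URL, parameter names and response shape below are invented for illustration and are not Perplexity's documented interface:

```python
# Hypothetical client for a web-scale search API. Endpoint, payload fields
# and response shape are invented; consult the provider's docs for the
# real interface.
import json
import urllib.request

def build_search_request(query, api_key,
                         endpoint="https://api.example.com/search"):
    # Build an authenticated JSON POST request for a search query.
    payload = json.dumps({"query": query, "max_results": 5}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def search(query, api_key):
    # Execute the request and return the (assumed) results list.
    with urllib.request.urlopen(build_search_request(query, api_key)) as resp:
        return json.load(resp)["results"]
```

Separating request construction from execution keeps the authenticated-request logic testable without network access.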

By Robert Krzaczyński

DeepMind Release Gemini Robotics-ER 1.5 for Embodied Reasoning

Google DeepMind introduced Gemini Robotics-ER 1.5, a new embodied reasoning model for robotic applications. The model is available in preview through Google AI Studio and the Gemini API.

By Daniel Dominguez


ChatGPT: The Agentic App

ChatGPT's long awaited move into user monetization.


OpenAI Launches Sora 2 With TikTok-style App

OpenAI on Tuesday released Sora 2, its most advanced video generation model yet, alongside a TikTok-style social app that will let users insert themselves into AI-created scenes through a feature called "cameos."

Creality Falcon A1 Pro Review: Comprehensive Testing of the Dual-Module Laser

Read the Creality Falcon A1 Pro Review with insights on speed, accuracy, and usability to learn why this compact laser engraver could be a top choice for makers.

Liverpool Lose To Galatasaray In Champions League, Chelsea Beat Mourinho's Benfica

A Victor Osimhen penalty gave Galatasaray victory over Liverpool in the Champions League on Tuesday, while Chelsea edged out Jose Mourinho's Benfica and Kylian Mbappe hit a hat-trick for Real Madrid in Kazakhstan.

Stars Align For Louis Vuitton, Stella McCartney At Paris Fashion Week

Louis Vuitton showcased a collection of flouncy skirts and sculptural ruffles in front of a star-packed audience while Stella McCartney had war and peace on her mind for her show on day two of Paris Fashion Week on Tuesday.

The Three Countries Closest To Hamas Are Reportedly Urging Its Leaders To Accept Trump's Gaza Deal

Qatar, Egypt and Turkey, the three countries closest to Hamas, are reportedly urging the group's leaders to accept Donald Trump's Gaza deal despite being infuriated by last-minute changes made by Israeli Prime Minister Benjamin Netanyahu

Antarctic Sea Ice Hits Its Third-lowest Winter Peak On Record

Antarctica's winter sea ice has hit its third-lowest peak in nearly half a century of satellite monitoring, researchers said Tuesday, highlighting the growing influence of climate change on the planet's southern pole.

After 13 Years, Is a Spice Girls Reunion in The Works? Here's What We Know

Spice Girls plan a 2026 world tour, with Victoria Beckham hinting at involvement while others push for a major comeback.

From Trump Tower to EA Tower: Kushner's Plan to Dominate Video Games

Electronic Arts is set for a record $55bn buyout led by Saudi Arabia's PIF, Silver Lake, and Jared Kushner's Affinity Partners, sparking scrutiny over foreign influence in gaming.

YouTube To Pay $24.5 Million Settlement for Donald Trump Lawsuit Over President's Account Suspension

YouTube agreed to pay $24.5 million as a settlement for a lawsuit by United States President Donald Trump over his 2021 account suspension.

Taylor Swift Reportedly Rejected Super Bowl LX Halftime Offer Before Bad Bunny Announcement

Taylor Swift reportedly turned down the 2026 Super Bowl halftime show before the NFL named Bad Bunny as the headliner. Here's what sources say and how Bad Bunny spent the weekend before the big reveal.

OpenAI Rolls Out New Parental Controls for ChatGPT Following Teen's Suicide

OpenAI launched a new set of parental controls for ChatGPT following a lawsuit over a teenager's suicide.

Children Worldwide Lose 8.45 Million Days of Healthy Life Yearly Due To Second-Hand Smoking, Research Claims

New research found that second-hand smoke affects children's health worldwide, causing them to lose 8.45 million days of healthy life every year.

Trump Hints At 'Land' Strikes On Venezuelan Cartels: 'Going To Look Very Seriously'

President Donald Trump hinted at potential strikes on "land" against Venezuelan cartels as his administration continues to intensify pressure on the country

Nebraska Gov. Jim Pillen Opts State Into Federal School Tax Credit Program Under 'One Big Beautiful Bill'

Nebraska opts into a new federal school tax credit program under the government's One Big Beautiful Bill.

Elvis Presley Allegedly Hired a Hitman to Murder Priscilla's Karate Instructor Lover

A shocking claim in Priscilla Presley's new memoir alleges Elvis once considered hiring a hitman to kill her karate instructor lover Mike Stone, RadarOnline reports.

Trump Says He Taunted Putin About Inability To Win Ukraine War: 'Are You a Paper Tiger?'

President Donald Trump said he taunted Russian counterpart Vladimir Putin over his inability to win the war in Ukraine after more than three years

Wedbush's Dan Ives Sets the Highest Target Price for Tesla After Musk Buys $1B of the EV Stock

Wedbush raised its price target for the Tesla stock to $600 per share, citing the company's accelerated path to AI revolution and unparalleled footprint in the autonomous and robotics segments.

Mag 7 Remains Top Picks for Hedge Funds as Money Managers Poured $185B Into These Stocks in Q2

US megacap tech stocks, which are driving the AI boom, remain the top picks for hedge funds, reaffirming their forecasts of long-term growth potential.

Who is Beth Bourne? Moms for Liberty Chairperson Strips To Protect Young Girls From Transgender Locker Room Policy

Moms for Liberty chair Beth Bourne sparked fury at a California school board meeting after stripping to a bikini in protest against transgender locker room policies.

Donald Trump Slammed After Sharing Chuck Schumer Deepfake Racist Video: 'Bigotry Will Get You Nowhere,' Says Jeffries

Donald Trump has triggered outrage after sharing a racist AI-generated video of Democratic leaders Chuck Schumer and Hakeem Jeffries, deepening tensions ahead of a government shutdown deadline.

What is the McDonald's Monopoly Scandal? Here's What Happened As Controversial Game Returns

McDonald's Monopoly is returning to the US after nearly a decade, but the comeback revives memories of the multimillion-dollar fraud scandal that rocked the promotion in the 1990s and early 2000s.

350K Michigan Workers Ordered to Repay Covid Jobless Benefits After $2.7B 'Error' — Are You One of Them?

The Unemployment Insurance Agency in Michigan will resume recouping overpaid benefits, and those who miss out on repaying could face financial penalties and interest on the amount owed.

From ITT to Teledyne: UBS Names 'Hidden Gem' Stocks Set to Explode This Earnings Season

UBS believes the double-digit valuation discount for small- to mid-cap industrials compared with the large-cap multis position several companies within SMID to outperform moving forward.

Trump Turns Uncle Sam Into a Shareholder: Inside His Shock $11B Intel Stake and the Companies Next on His List

The Trump Administration has been aggressively pursuing equity stakes in major US private firms since July.

ICE Arrests Migrant Accused Of Being a Sex Offender In New Mexico

A migrant accused of being a sex offender was detained by Immigration and Customs Enforcement agents in New Mexico, the agency said in a statement


Why I Quit My 6 Figure Side Hustle for a Full-Time Data Science Job

Here's why you should not quit your full-time data science job for high-paying side hustles.

Exploring Metaclasses in Python: Unleashing the Power of Class Creation

Understand how useful in working with the metaclasses in Python.


Spotify founder Daniel Ek steps down as CEO

No summary available.

Who’s winning — and who’s losing — in Europe’s neobank race?

No summary available.

Revolut CEO backs French startup challenging Facebook Ads in $50m round

No summary available.

How to fund more female founders, according to an EIC board member

No summary available.


The OnePlus Pad 2 remains one of the best Android tablets out there -- and it's on sale

For a limited time, get $100 off the OnePlus Pad 2 and receive a protective case.

Verizon will give you a free Nintendo Switch right now - here's how to get yours

Right now at Verizon, when you sign up for a 1- or 2-gigabyte Fios internet plan, you'll get a Nintendo Switch console for free.

Walmart CEO expects AI will 'change literally every job' - not just engineering

Workers all over the economy will experience AI-related shifts, Walmart CEO Doug McMillon said.

Opera agentic browser Neon starts rolling out to users - how to join the waitlist

The company details more of its AI browser's features and reveals the subscription pricing.

I went hands-on with Amazon's newest Kindle models, and they've never felt so premium

The two new hybrid note-taker and e-reader devices are made to mimic writing on paper. But you'll have to pay to play.

OpenAI's Sora 2 launches with insanely realistic video and an iPhone app

The app has customizable algorithms and privacy protections for your likeness. Here's how it works.

This hidden Pixel camera feature makes my photos absolutely pop - how to enable it

Are your Pixel photos a little dull? Try this simple tweak to make them look more vibrant and colorful.

Amazon event 2025 live: Reactions to Echo Dot Max, Ring, Fire TV, Kindle Scribe Colorsoft, more

New Echo devices, Fire TVs, and a color Kindle Scribe. These are only a few of the products Amazon announced at its Devices and Services event today.

43% of workers say they've shared sensitive info with AI - including financial and client data

AI use is surging, but cybersecurity training isn't keeping up, a new study finds.

4 better ways to protect your business than dreaded (and useless) anti-phishing training

In fact, the longer a security training campaign continues, the more likely that employees will fail the test. Here's what to do instead.

I traveled with the Sony XM6 headphones for a week - and can't go back

The Sony WH-1000XM6 are staying in my backpack for the next few trips, at least.

NordVPN's Meshnet 'not going anywhere' after all - thanks to customer revolt

In a rare example of a tech company listening to its users, NordVPN opts to open-source the service instead.

My favorite Apple Watch Ultra 3 feature is one I hope to never use

The Apple Watch Ultra 3 supports satellite connectivity, letting users share their location and send texts miles away from network coverage.

Popular Neon app that pays users to share call recordings remains down for now - here's why

The service has been taken down, but the developer promises a relaunch in another one to two weeks.

Microsoft lets you pick a character for your AI - with its new Copilot Portraits feature

Available as a Copilot Labs experience, Copilot Portraits supplements the voice with animated 2D images that speak with you in real time.

Watch out, shoppers: You can't hide your Amazon orders anymore - but there's a workaround

Be careful what you order, as Amazon no longer lets you hide, delete, or archive orders from your purchase history.

Beats just gave an old favorite a huge makeover - and a new Powerbeats name

The new Powerbeats Fit pack Spatial Audio, Apple's H1 chip, and more for $200.

Apple's iOS 26.0.1 fixes a bevy of glitches - update your iPhone now

The latest update resolves flaws in several features and patches a known security vulnerability.

The best value flagship phone of 2025 is getting a sequel - but without an iconic partnership

The OnePlus 15 will launch globally after the Asia-only release of the OnePlus 13T.

Your Whoop app isn't just for fitness anymore - you can order blood tests through it now

A clinician will review your blood test results and provide feedback. Here's how it works.




OpenAI Launches Sora 2 and a Consent-Gated Sora iOS App

OpenAI released Sora 2, a text-to-video-and-audio model focused on physical plausibility, multi-shot controllability, and synchronized dialogue/SFX. The OpenAI team has also launched a new invite-only Sora iOS app (U.S. and Canada first) that enables social creation, remixing, and consent-controlled “cameos” for inserting a verified likeness into generated scenes. On model capabilities, Sora 2 claims materially better […]

The post OpenAI Launches Sora 2 and a Consent-Gated Sora iOS App appeared first on MarkTechPost.

Delinea Released an MCP Server to Put Guardrails Around AI Agents Credential Access

Delinea released a Model Context Protocol (MCP) server that brokers AI-agent access to credentials stored in Delinea Secret Server and the Delinea Platform. The server applies identity checks and policy rules on every call, aiming to keep long-lived secrets out of agent memory while retaining full auditability. The GitHub project DelineaXPM/delinea-mcp […]

DeepSeek V3.2-Exp Cuts Long-Context Costs with DeepSeek Sparse Attention (DSA) While Maintaining Benchmark Parity

DeepSeek released DeepSeek-V3.2-Exp, an “intermediate” update to V3.1 that adds DeepSeek Sparse Attention (DSA)—a trainable sparsification path aimed at long-context efficiency. DeepSeek also reduced API prices by 50%+, consistent with the stated efficiency gains. DeepSeek-V3.2-Exp keeps the V3/V3.1 stack (MoE + MLA) and inserts a two-stage attention path: (i) a lightweight “indexer” that scores context […]

A Coding Guide to Build a Hierarchical Supervisor Agent Framework with CrewAI and Google Gemini for Coordinated Multi-Agent Workflows

In this tutorial, we walk you through the design and implementation of an advanced Supervisor Agent Framework using CrewAI with the Google Gemini model. We set up specialized agents, including researchers, analysts, writers, and reviewers, and bring them under a supervisor agent who coordinates and monitors their work. By combining structured task configurations, hierarchical workflows, and […]


Designing CPUs for next-generation supercomputing

In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks. Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a…

Powering HPC with next-generation CPUs

For all the excitement around GPUs—the workhorses of today’s AI revolution—the central processing unit (CPU) remains the backbone of high-performance computing (HPC). CPUs still handle 80% to 90% of HPC workloads globally, powering everything from climate modeling to semiconductor design. Far from being eclipsed, they’re evolving in ways that make them more competitive, flexible, and…

The Download: our thawing permafrost, and a drone-filled future

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Scientists can see Earth’s permafrost thawing from space Something is rotten in the city of Nunapitchuk. In recent years, sewage has leached into the earth. The ground can feel squishy, sodden. This small…

The US may be heading toward a drone-filled future

On Thursday, I published a story about the police-tech giant Flock Safety selling its drones to the private sector to track shoplifters. Keith Kauffman, a former police chief who now leads Flock’s drone efforts, described the ideal scenario: A security team at a Home Depot, say, launches a drone from the roof that follows shoplifting…

Scientists can see Earth’s permafrost thawing from space

Something is rotten in the city of Nunapitchuk. In recent years, a crack has formed in the middle of a house. Sewage has leached into the earth. Soil has eroded around buildings, leaving them perched atop precarious lumps of dirt. There are eternal puddles. And mold. The ground can feel squishy, sodden.  This small town…

Delivering a digital sixth sense with next-generation networks


Semiconductor neuron mimics brain's memory and adaptive response abilities

The human brain does more than simply regulate synapses that exchange signals; individual neurons also process information through intrinsic plasticity, the adaptive ability to become more sensitive or less sensitive depending on context. Existing artificial intelligence semiconductors, however, have struggled to mimic this flexibility of the brain.

OpenAI's ChatGPT now lets users buy from Etsy, Shopify in push for chatbot shopping

OpenAI is turning ChatGPT into a virtual merchant that can help sell goods for Etsy and Shopify as the artificial intelligence company looks for new revenue in online commerce.

AI could automate up to 26% of tasks in art, design, entertainment and the media

Artificial intelligence is transforming the creative process. AI can not only generate complex texts, high-quality images, and videos in just a few minutes. It can also support creative thinking and act as a research tool in the stages prior to artistic production. However, its adoption also raises ethical concerns around issues such as originality, authorship, ownership, and potential job displacement.

3D printing becomes stronger and more economical with light and AI

Photocurable 3D printing, widely used for everything from dental treatments to complex prototype manufacturing, is fast and precise but has the limitation of being fragile and easily broken by impact. A KAIST research team has developed a new technology to overcome this weakness, paving the way for the more robust and economical production of everything from medical implants to precision machine parts.

Artificial intelligence may not be artificial

The term artificial intelligence conveys the sense that what computers do is either inferior to or at least apart from human intelligence. AI researcher Blaise Agüera y Arcas argues that may not be the case.

AI tool helps researchers treat child epilepsy

An artificial intelligence tool that can detect tiny, hard-to-spot brain malformations in children with epilepsy could help patients access life-changing surgery quicker, Australian researchers said on Wednesday.

How safe is your face? The pros and cons of having facial recognition everywhere

Walk into a shop, board a plane, log into your bank, or scroll through your social media feed, and chances are you might be asked to scan your face. Facial recognition and other kinds of face-based biometric technology are becoming an increasingly common form of identification.

California enacts AI safety law targeting tech giants

California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.

Anthropic launches new AI model, touting coding supremacy

US startup Anthropic on Monday announced the launch of its new generative artificial intelligence model, Claude Sonnet 4.5, which it says is the world's best for computer programming.

Creator says AI actress is 'piece of art' after backlash

The creator of an AI actress who exploded across the internet over the weekend has insisted she is an artwork, after a fierce backlash from the creative community.


Advancing Anomaly Detection for Industry Applications with NVIDIA NV-Tesseract-AD

In a recent blog post, we introduced NVIDIA NV-Tesseract, a family of models designed to unify anomaly detection, classification, and forecasting within a...

How id Software Used Neural Rendering and Path Tracing in DOOM: The Dark Ages

DOOM: The Dark Ages pushes real-time graphics to new limits by integrating RTX neural rendering and path tracing, setting a new standard for how modern games...


Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up

Founded by a co-creator of ChatGPT, Periodic Labs aims to build artificial intelligence that can accelerate discoveries in physics, chemistry and other fields.

What We Know About ChatGPT’s New Parental Controls

OpenAI said parents can set time and content limits on accounts and receive notifications if ChatGPT detects signs of potential self-harm.

YouTube Settles Trump Lawsuit Over Account Suspension for $24.5 Million

Mr. Trump had sued Alphabet, the parent of YouTube and Google, and other social media companies after the platforms suspended his accounts following the Jan. 6, 2021, riot at the Capitol.

California’s Gavin Newsom Signs Major AI Safety Law

Gavin Newsom signed a major safety law on artificial intelligence, creating one of the strongest sets of rules about the technology in the nation.


Kodacolor 100 is a New Film From Eastman Kodak Arriving This Week

Kodak has quietly launched a new color 35mm film, Kodacolor 100. It joins existing Kodak film, including Kodak Ektar 100, Gold 200, ColorPlus 200, Portra 400, UltraMax 400, and Portra 800.

Leica Achieved Record Sales for the Fourth Consecutive Year

Leica has announced that it has achieved record sales for the fourth consecutive financial year, with the highest revenue in its history along with continued profitability.

The Nikon ZR Can Record 6K 59.94p for More Than 2 Hours Despite No Active Cooling

Nikon says that, based on internal testing, it expects the Nikon ZR to be able to record continuously for up to 2 hours and 5 minutes in 6K resolution at 59.94 frames per second in R3D NE at an ambient temperature of 25 degrees Celsius, despite the fact that it is a very small camera with no active cooling.

See (and Hear) a Camera Get Smashed by a Foul Ball in the First MLB Playoff Game of 2025

Just an hour after the first pitch of the first MLB playoff game of the season, a camera behind home plate was victimized by a glancing foul ball, loudly erupting into a shower of glass.

DJI: ‘DJI Is Not Controlled by the Government and Has No Ties to the Military’

In response to a court ruling last week, DJI has published a statement on its blog reiterating that it is not controlled by the Chinese government and has no ties to the nation's military. It also argues that, despite the ruling, the court largely agrees with it on this stance.

25 of the Best Analog Photos of 2025

The Analog Sparks 2025 International Film Photography Awards celebrate analog photography as a medium and elevate the best film photographers worldwide.

You Can Now Edit Your Videos in Premiere on iPhone for Free

Adobe announced that its Premiere video editing software was coming to iPhone earlier this month. It is now officially available in the App Store, and is entirely free to download and use.

The Analogue aF-1 is a New Point-and-Shoot Film Camera With Autofocus

The aF-1 is a brand-new film camera design from Analogue, a design agency based in Amsterdam, the Netherlands. Aiming for release in early 2026, the aF-1 promises to be an accessible, affordable point-and-shoot camera with autofocus and automatic film winding, designed from scratch.

The 2025 Astrophotography Prize Explores the Wonders of Deep Space

The 2025 Astrophotography Prize, a competition dedicated to the "education and the continual improvement of astrophotography," has revealed its field of winners.

The Untold Story of the Famous Marilyn Monroe Skirt Blowing Photo

A photographer has revealed the little-known story behind the iconic image of Marilyn Monroe standing on a New York subway grate with the air from below lifting her skirt.

Sony’s New 100mm f/2.8 Macro GM Is Its Best Macro Lens Ever

Sony has announced its first G Master lens dedicated to macro photography, the Sony FE 100mm f/2.8 Macro G Master OSS. It arrives a decade after Sony's 90mm f/2.8 G Macro OSS, a lens celebrated for its excellent sharpness and close-focusing capabilities. Sony's new lens promises to be superior in every meaningful way and aims to be the best macro lens for full-frame mirrorless cameras, period.

‘Life’ Photographer’s Archive and $1M Donated to Center for Creative Photography

The Center for Creative Photography at the University of Arizona has received the archive of renowned Life magazine photographer Benn Mitchell, as well as a $1 million gift.

DxO FilmPack 8 Takes Your Favorite Photos Back Through Time

French photography software company DxO has announced FilmPack 8, the latest version of its premier film emulation software. The updated app introduces Time Warp mode, an interactive way to explore the rich history of photography, along with new film rendering options.

Photographer Tells Story Behind Powerful Image of Man Calmly Facing Super Typhoon

Last week in China, a press photographer's powerful image of a man calmly facing enormous waves whipped up by Super Typhoon Ragasa in Hong Kong went viral across the country.

The World’s First Orchestra Played Entirely by AI Surveillance Cameras

A team of engineers transformed video surveillance cameras into musical instruments, creating the world’s first orchestra made entirely of cameras.

Trial Opens into Death of Sports Photographer Killed by Runaway Bike

A trial is underway over the death of a sports photographer who was killed by a runaway bike while covering a motorcycle racing competition.

Beautiful Dutch Village That is Magnet for Photographers to Start Charging Tourist Fee

The picturesque Dutch neighborhood of Zaanse Schans near Amsterdam will start charging tourists after a record 2.6 million people visited the small area last year.


These leading Syrian apps are helping rebuild the country after Assad

People working on laptops inside Karma Cafe in the Abu Rummaneh area of Damascus.

China charges ahead as South Korea’s battery giants lose their spark

South Korea’s top battery makers are losing ground as Chinese firms dominate with cheaper technology, factory scale, and state backing.


Are We Alone? NASA’s Habitable Worlds Observatory Aims to Find Out

The Habitable Worlds Observatory is poised to tell us whether Earthlike planets are common—if it can get off the ground

How China’s New Emissions Pledge Could Radically Alter Climate Change

China’s plan to reduce greenhouse gases will largely determine the world’s emissions trajectory, researchers say

Six New Gecko Species Discovered by Loud Barking Mating Calls

Scientists found new gecko species hidden in plain sight in pristine deserts of southern Africa, thanks to their loud, barking mating calls


Ozempic-maker Novo Nordisk to cut jobs at Athlone site

Reports suggest around 115 jobs are set to be lost through mandatory and voluntary redundancies.

Cyberattack brews trouble for Asahi as operations disrupted

As Asahi investigates a system outage, Jaguar Land Rover and Harrods struggle to recover from their own recent breaches.

Maynooth experts develop way to recover prints from fired bullets

The method still has to be tested and validated before it can help law enforcement in criminal investigations.

Creative minds: 5 STEM events to cultivate inspiration

If you want to rediscover a passion for STEM, why not consider attending a fun and informative science event?

South Korean chipmaker Rebellions raises $250m backed by Arm, Samsung

The Series C was also backed by Pegatron VC, Lion X Ventures, Korea Development Bank and Korelya Capital.

Galway medtech CLS plans 140 jobs to expand into new markets

The Irish-led contract lab and quality management specialist will begin hiring immediately.

‘AI a career catalyst’, finds Microsoft Work Trend Index, but access is unequal

The new survey indicates AI’s potential to accelerate professional progress, but finds some people are at risk of being left behind.

Can OpenAI’s Sora 2-powered social media app rival TikTok?

OpenAI is apparently preparing to release a social media app alongside the latest release of its AI video-generation model Sora.

What to know about DeepSeek and Anthropic’s latest AI models

Anthropic's Claude Sonnet 4.5 focuses on coding while DeepSeek's V3.2-Exp boasts a reduced compute cost.


Sora 2

Having watched this morning's Sora 2 introduction video, the most notable feature (aside from audio generation - original Sora was silent, Google's Veo 3 supported audio in May 2025) looks to be what OpenAI are calling "cameos" - the ability to easily capture a video version of yourself or your friends and then use them as characters in generated videos.

My guess is that they are leaning into this based on the incredible success of ChatGPT image generation in March - possibly the most successful product launch of all time, signing up 100 million new users in just the first week after release.

The driving factor for that success? People love being able to create personalized images of themselves, their friends and their family members.

Google saw a similar effect with their Nano Banana image generation model. Gemini VP Josh Woodward tweeted on 24th September:

🍌 @GeminiApp just passed 5 billion images in less than a month.

Sora 2's cameos feature looks to me like an attempt to capture that same viral magic, but for short-form videos rather than images.

Designing agentic loops

Coding agents like Anthropic's Claude Code and OpenAI's Codex CLI represent a genuine step change in how useful LLMs can be for producing working code. These agents can now directly exercise the code they are writing, correct errors, dig through existing implementation details, and even run experiments to find effective code solutions to problems.

As is so often the case with modern AI, there is a great deal of depth involved in unlocking the full potential of these new tools.

A critical new skill to develop is designing agentic loops.

One way to think about coding agents is that they are brute force tools for finding solutions to coding problems. If you can reduce your problem to a clear goal and a set of tools that can iterate towards that goal, a coding agent can often brute force its way to an effective solution.

My preferred definition of an LLM agent is something that runs tools in a loop to achieve a goal. The art of using them well is to carefully design the tools and loop for them to use.
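That definition - tools in a loop to achieve a goal - can be sketched in a few lines of Python. This is a toy illustration, not any particular vendor's API: `pick_action` here is a hard-coded stand-in for the LLM call that would normally choose the next tool.

```python
# Toy sketch of "tools in a loop": the agent repeatedly picks a tool,
# we execute it, and the observation is appended to a history that
# informs the next pick, until the agent declares it is finished.
def run_agent(goal, tools, pick_action, max_steps=10):
    history = [("goal", goal)]
    for _ in range(max_steps):
        action = pick_action(history)  # a real agent would call an LLM here
        if action["tool"] == "finish":
            return action["result"]
        observation = tools[action["tool"]](*action.get("args", ()))
        history.append((action["tool"], observation))  # feed the result back
    return None  # step budget exhausted without reaching the goal

# A hard-coded stand-in for the model: count characters, then finish.
def pick_action(history):
    last_tool, last_obs = history[-1]
    if last_tool == "goal":
        return {"tool": "count", "args": (last_obs,)}
    return {"tool": "finish", "result": last_obs}

print(run_agent("hello world", {"count": lambda s: len(s)}, pick_action))  # 11
```

Everything interesting in a real agent lives in the two things this sketch stubs out: which tools go in the `tools` dictionary, and how safely the loop is allowed to run them.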

The joy of YOLO mode

Agents are inherently dangerous - they can make poor decisions or fall victim to malicious prompt injection attacks, either of which can lead to harmful tool calls. Since the most powerful coding agent tool is "run this command in the shell", a rogue agent can do anything that you could do by running a command yourself.

To quote Solomon Hykes:

An AI agent is an LLM wrecking its environment in a loop.

Coding agents like Claude Code counter this by defaulting to asking you for approval of almost every command that they run.

This is kind of tedious, but more importantly, it dramatically reduces their effectiveness at solving problems through brute force.

Each of these tools provides its own version of what I like to call YOLO mode, where everything gets approved by default.

This is so dangerous, but it's also key to getting the most productive results!

Here are three key risks to consider from unattended YOLO mode.

  1. Bad shell commands deleting or mangling things you care about.
  2. Exfiltration attacks where something steals files or data visible to the agent - source code or secrets held in environment variables are particularly vulnerable here.
  3. Attacks that use your machine as a proxy to attack another target - for DDoS or to disguise the source of other hacking attacks.

If you want to run YOLO mode anyway, you have a few options:

  1. Run your agent in a secure sandbox that restricts the files and secrets it can access and the network connections it can make.
  2. Use someone else's computer. That way if your agent goes rogue, there's only so much damage it can do, including wasting someone else's CPU cycles.
  3. Take a risk! Try to avoid exposing it to potential sources of malicious instructions and hope you catch any mistakes before they cause any damage.

Most people choose option 3.

Despite the existence of container escapes, I think option 1 - using Docker or the new Apple container tool - is a reasonable risk to accept for most people.

Option 2 is my favorite. I like to use GitHub Codespaces for this - it provides a full container environment on-demand that's accessible through your browser and has a generous free tier too. If anything goes wrong it's a Microsoft Azure machine somewhere that's burning CPU and the worst that can happen is code you checked out into the environment might be exfiltrated by an attacker, or bad code might be pushed to the attached GitHub repository.

There are plenty of other agent-like tools that run code on other people's computers. Code Interpreter mode in both ChatGPT and Claude can go a surprisingly long way here. I've also had a lot of success (ab)using OpenAI's Codex Cloud.

Coding agents themselves implement various levels of sandboxing, but so far I've not seen convincing enough documentation of these to trust them.

Update: It turns out Anthropic have their own documentation on Safe YOLO mode for Claude Code which says:

Letting Claude run arbitrary commands is risky and can result in data loss, system corruption, or even data exfiltration (e.g., via prompt injection attacks). To minimize these risks, use --dangerously-skip-permissions in a container without internet access. You can follow this reference implementation using Docker Dev Containers.

Locking internet access down to a list of trusted hosts is a great way to prevent exfiltration attacks from stealing your private source code.
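As a rough sketch of that kind of containment, assuming Docker is available - the image contents and the mount are illustrative, and the stricter trusted-hosts variant would need an egress proxy on top of this:

```shell
# Build a throwaway sandbox image from an inline Dockerfile (illustrative).
docker build -t agent-sandbox - <<'EOF'
FROM python:3.12-slim
RUN useradd -m agent
USER agent
WORKDIR /work
EOF

# --network none blocks all egress, so stolen files and secrets have
# nowhere to go; only the current project directory is mounted inside.
docker run --rm -it \
  --network none \
  -v "$PWD":/work \
  agent-sandbox bash
```

From the shell inside that container you can launch the agent in its YOLO mode with very little worth stealing in reach.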

Picking the right tools for the loop

Now that we've found a safe (enough) way to run in YOLO mode, the next step is to decide which tools we need to make available to the coding agent.

You can bring MCP into the mix at this point, but I find it's usually more productive to think in terms of shell commands instead. Coding agents are really good at running shell commands!

If your environment allows them the necessary network access, they can also pull down additional packages from NPM and PyPI and similar. Ensuring your agent runs in an environment where random package installs don't break things on your main computer is an important consideration as well!

Rather than leaning on MCP, I like to create an AGENTS.md (or equivalent) file with details of packages I think they may need to use.

For a project that involved taking screenshots of various websites I installed my own shot-scraper CLI tool and dropped the following in AGENTS.md:

To take a screenshot, run:

shot-scraper http://www.example.com/ -w 800 -o example.jpg

Just that one example is enough for the agent to guess how to swap out the URL and filename for other screenshots.

Good LLMs already know how to use a bewildering array of existing tools. If you say "use playwright python" or "use ffmpeg" most models will use those effectively - and since they're running in a loop they can usually recover from mistakes they make at first and figure out the right incantations without extra guidance.
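In the same spirit, an AGENTS.md entry for a project that manipulates video might look like this (an illustrative sketch, not from the original post - the file names and timestamp are placeholders):

```
To extract a single frame from a video, run:

ffmpeg -i input.mp4 -ss 00:00:05 -frames:v 1 frame.jpg
```

As with the screenshot example, one concrete invocation is usually enough for the model to generalize to other inputs and outputs.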

Issuing tightly scoped credentials

In addition to exposing the right commands, we also need to consider what credentials we should expose to those commands.

Ideally we wouldn't need any credentials at all - plenty of work can be done without signing into anything or providing an API key - but certain problems will require authenticated access.

This is a deep topic in itself, but I have two key recommendations here:

  1. Try to provide credentials to test or staging environments where any damage can be well contained.
  2. If a credential can spend money, set a tight budget limit.

I'll use an example to illustrate. A while ago I was investigating slow cold start times for a scale-to-zero application I was running on Fly.io.

I realized I could work a lot faster if I gave Claude Code the ability to directly edit Dockerfiles, deploy them to a Fly account and measure how long they took to launch.

Fly allows you to create organizations, and you can set a budget limit for those organizations and issue a Fly API key that can only create or modify apps within that organization...

So I created a dedicated organization for just this one investigation, set a $5 budget, issued an API key and set Claude Code loose on it!

In that particular case the results weren't useful enough to describe in more detail, but this was the project where I first realized that "designing an agentic loop" was an important skill to develop.

When to design an agentic loop

Not every problem responds well to this pattern of working. The things to look out for here are problems with clear success criteria where finding a good solution is likely to involve (potentially slightly tedious) trial and error.

Any time you find yourself thinking "ugh, I'm going to have to try a lot of variations here" is a strong signal that an agentic loop might be worth trying!

A few examples:

  • Debugging: a test is failing and you need to investigate the root cause. Coding agents that can already run your tests can likely do this without any extra setup.
  • Performance optimization: this SQL query is too slow, would adding an index help? Have your agent benchmark the query and then add and drop indexes (in an isolated development environment!) to measure their impact.
  • Upgrading dependencies: you've fallen behind on a bunch of dependency upgrades? If your test suite is solid an agentic loop can upgrade them all for you and make any minor updates needed to reflect breaking changes. Make sure a copy of the relevant release notes is available, or that the agent knows where to find them itself.
  • Optimizing container sizes: Docker container feeling uncomfortably large? Have your agent try different base images and iterate on the Dockerfile to try to shrink it, while keeping the tests passing.

A common theme in all of these is automated tests. The value you can get from coding agents and other LLM coding tools is massively amplified by a good, cleanly passing test suite. Thankfully LLMs are great for accelerating the process of putting one of those together, if you don't have one yet.
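The SQL-index example above is easy to try by hand, and it shows the kind of benchmark an agent would run in its loop. A sketch, using SQLite and synthetic data purely for illustration:

```python
import sqlite3
import time

# Synthetic table: 100k rows, and a query that filters on an unindexed column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts REAL)")
db.executemany("INSERT INTO events (user_id, ts) VALUES (?, ?)",
               [(i % 500, float(i)) for i in range(100_000)])

def bench(query, args, runs=20):
    """Time the query and return the best-of-N wall-clock seconds."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        db.execute(query, args).fetchall()
        best = min(best, time.perf_counter() - t0)
    return best

q = "SELECT * FROM events WHERE user_id = ?"
before = bench(q, (42,))
db.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = bench(q, (42,))
print(f"before index: {before:.6f}s  after index: {after:.6f}s")
```

An agent in a loop would run exactly this kind of measurement, compare the numbers, drop the index, try a different one, and repeat until it found a change worth keeping.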

This is still a very fresh area

Designing agentic loops is a very new skill - Claude Code was first released only in February 2025!

I'm hoping that giving it a clear name can help us have productive conversations about it. There's so much more to figure out about how to use these tools as effectively as possible.

Tags: definitions, ai, generative-ai, llms, ai-assisted-programming, ai-agents, coding-agents

Turning investments into impact: Stack Overflow for Teams 2025.7

Over the past few releases, we’ve been investing in the foundation of Stack Overflow for Teams—strengthening infrastructure, modernizing integrations, and preparing for bigger shifts to come.

As your AI gets smarter, so must your API

Ryan sits down with Marco Palladino, CTO of Kong, to talk about the rise of AI agents and their impact on API consumption, the MCP protocol as a new standard for agents, the importance of observability and security in AI systems, and the importance for businesses and entrepreneurs to leverage opportunities in the agentic AI space now.


What Can You Do With a Free ODSC AI West Expo Pass?

Hoping to attend ODSC AI West 2025, but a bit short on cash? Don’t worry, the team has you covered so that you can still be a part of ODSC AI West 2025 this October 28th-30th and experience the latest in AI, all for free. Thanks to the Expo Pass, you can tailor your ODSC experience to your personal goals and preferences. The best part? By the end, you’ll see that attending ODSC AI West was a great investment of your time.

So here’s a detailed rundown of what you can expect from the world’s leading data science conference with Expo+ passes.

Expo & Demo Hall

The AI Expo & Demo Hall is where the future comes to life in the present. The team behind the scenes works hard so that at each and every ODSC conference, attendees get to meet emerging leaders, startups, and companies who are reshaping the world of data science. These trailblazers are excited to attend ODSC AI West so they can show the world what they’ve spent countless hours creating. The best part about the AI Expo & Demo Hall is that you get an entire overview of the data science world.

From keynotes to product solution showcases, networking, and enterprise-level players showing off some amazing demos, you’ll get a firsthand look at the current trends in AI.

Keynotes & Featured Speakers

The best part about a solid conference is the speakers with rich and diverse backgrounds. Let’s take a quick glance at a few of our Keynote & Feature speakers at ODSC AI West 2025:

  • Harrison Chase, CEO and Co-founder of LangChain
    Session: Context Engineering for AI Agents
  • Sinan Ozdemir, AI Author and Educator
    Session: Beyond Benchmarks — The Widening Gap Between Testing and Reality
  • João (Joe) Moura, CEO of CrewAI
    Session: Building, Deploying and Scaling Agents in Production
  • Talia Kohan, Staff Developer Advocate at Postman
    Session: You Can’t Do AI Without Quality APIs
  • Helen O’Sullivan, AI Solutions Specialist Manager at Dell Technologies
  • Byung-Gon Chun, Founder & CEO of FriendliAI
    Session: Scaling Inference for Generative AI
  • Emmett Shear, Co-founder of Softmax, former CEO of Twitch, and former interim CEO of OpenAI

Demo Sessions

We’ll have over 30 AI Insight demo sessions for you to check out as well! These sessions showcase how to use some of the biggest tools in AI today. Some sessions include:

  • Unblock Engineering Teams with AI Agents
  • Sphinx — The First AI Copilot for Jupyter Notebooks
  • Building Feedback-Driven Agentic Workflows
  • Accelerate Data Science at Your Desk with Dell and NVIDIA
  • Inference Engineering for Multimodal AI
  • Scaling Search & Observability for AI Workloads with Elastic
  • Solving Reproducibility in LLM Training and RAG with LakeFS
  • Trusting Your Data: Observability for Reliable AI and Analytics
  • Squadbase — Vibe Coding Platform for Business Intelligence
  • Fast, Cost-Efficient Inference to Scale Agentic AI
  • The Superagentic Future: Scaling AI Agents from Lab to Enterprise
  • Solve the Composite AI Puzzle with Optimization for Explainable Decision Intelligence ​

Networking and Other Events

Looking to meet some people? Here are some events you can attend:

  • AI After Dark (Oct 30th from 5–7 PM): Celebrate All Hallows’ Eve with ODSC West!
  • Networking receptions all three days — stay tuned for details
  • Women Ignite: Exclusive Lightning Talks and Networking Events
  • Coffee and Chat Meetups

Sign up for free

Some good stuff, right? What are you waiting for? Get your free ODSC AI West Open Pass here and get ready to experience all of the above.

Now if you want more from your experience, including 300+ hours of hands-on training sessions, workshops, and talks on Gen AI, LLMs, Machine Learning, Data Engineering, and more, check out our paid passes today. They’re running out quickly, so don’t miss out!

The Rise of AI-Powered Integrated Development Environments

Software engineering tools have continually evolved to boost developer productivity. The rise of Integrated Development Environments (IDEs) decades ago transformed how engineers write and manage code, moving us beyond the era of basic text editors and manual command-line workflows. IDEs consolidated editing, compiling, debugging, and other tasks into one interface — drastically reducing context-switching and setup overhead.

Today, we are witnessing another paradigm shift. Artificial intelligence is emerging as a game-changer in software development, effectively turning our coding tools into intelligent assistants. In other words, AI in software development is poised to elevate our coding environments into “super-IDEs” that automate routine work and help engineers focus on creative problem-solving.

From Text Editors to Modern IDEs

Before the advent of IDEs, developers wrote code in simple editors (or even on paper) and used separate compiler and debugger programs via the command line. This was a slow and error-prone process. Modern IDEs like Eclipse, Visual Studio, and IntelliJ changed the game by bundling everything a programmer needs — from syntax highlighting and code completion to automated builds and debugging — into a single application.

With features such as inline error detection, one-click compilation, integrated version control, and real-time feedback, IDEs dramatically reduced setup friction and human error while accelerating iteration speed.

Developers no longer had to juggle multiple tools; the IDE became a one-stop workspace that streamlined software development. The result was a significant boost in productivity and code quality, as the environment itself started catching mistakes and offering guidance. This historical leap in tooling set the stage for even greater automation in coding tasks — a promise now being realized by AI-driven enhancements.

AI-Powered “Super IDEs”: The Next Evolution

In 2025, the landscape of software development is shifting again with the introduction of AI-powered coding assistants integrated directly into our development environments. These systems are more than just advanced autocompletion; they’re effectively co-developers capable of writing, fixing, and optimizing code alongside human programmers. Often referred to as “super-IDEs,” AI-enhanced development tools leverage large language models and machine learning to understand context and intent. The impact is comparable to the jump from assembly to high-level languages — a new abstraction layer that automates away low-level or repetitive programming tasks.

Just as high-level languages freed developers from worrying about memory addresses or CPU instructions, AI-driven IDE features are freeing us from boilerplate code and trivial bug hunts. Below, we explore how AI is augmenting different facets of the development lifecycle, from coding to testing and deployment.

Smarter Code Generation and Autocomplete

Generative AI is enabling IDEs to write code for us in ways that were unimaginable a few years ago. AI code generators like GitHub Copilot (powered by OpenAI Codex) can turn natural language prompts into entire code snippets or functions, saving developers from typing out routine pieces of logic. In effect, the IDE can now suggest not just the next word or variable name, but whole blocks of code to implement a given intent. By automating repetitive boilerplate and providing intelligent code completions, these tools free up time for engineers to concentrate on more complex and creative tasks.

For example, instead of writing yet another data access layer or API client by hand, a developer can describe the requirements in plain English and let the AI generate an initial implementation. The human coder’s role then shifts to reviewing and refining the AI’s output. This collaborative workflow leads to writing cleaner code faster, as the AI learns from vast amounts of programming data and offers best-practice suggestions. It’s a leap forward for productivity — one that underscores why software engineers should be embracing AI in their day-to-day work.

Intelligent Testing and QA

AI is also revolutionizing how we test and debug software. Traditionally, writing unit tests and tracking down bugs can be as time-consuming as writing the application code itself. Now, machine learning–powered testing tools can automatically generate test cases and even write test scripts by analyzing your codebase. Platforms like Testim and Applitools use AI to examine user interfaces and workflows, quickly detecting anomalies or regressions that manual testing might miss.

One key advantage of these AI-driven testing tools is their ability to learn and adapt over time. As your software evolves, the AI updates the tests or highlights new edge cases, so you’re continuously covered without having to rewrite tests for every change. This results in faster testing cycles and more reliable software releases, with significantly reduced human error. On the debugging front, AI assistants can analyze error logs and stack traces to pinpoint likely causes of failures. Instead of spending hours sifting through code, a developer can get an immediate hint about which function or commit introduced a bug. The outcome is a more efficient QA process: engineers spend less time on tedious testing and debugging, and more time ensuring the software meets users’ needs.

Streamlining DevOps and Deployment with AI

Modern software engineering doesn’t stop at writing code — it also involves deploying and maintaining applications reliably. Here too, AI is making its mark. In DevOps and CI/CD pipelines, AI-driven tools are helping teams foresee and prevent problems before they disrupt users. For instance, solutions like Harness and JenkinsX utilize machine learning to predict potential deployment failures and automate safe rollbacks in the event of an issue.

An AI-enhanced pipeline can monitor your build and deployment process, flagging unusual patterns that might indicate an error, or optimizing resource usage for better performance. Furthermore, AI systems can continuously watch over running applications in production: analyzing server logs, monitoring metrics, and detecting performance bottlenecks or security anomalies in real time.

This proactive monitoring means engineers can address issues before they escalate into outages, thus reducing downtime and improving system stability. By integrating AI into DevOps, software teams achieve smoother, more efficient deployments with minimal manual intervention. In short, the deployment process becomes smarter and more autonomous — your infrastructure almost manages itself, guided by AI insights. This level of automation and foresight was previously unheard of in earlier eras of software development, highlighting how AI engineering is transforming software engineering practices at every level.

Code Quality and Security Enhancement

Quality assurance and security are non-negotiable aspects of professional software development, and AI is upping the game here as well. Traditional static code analyzers can catch syntax errors or known bad practices, but AI-powered code analysis goes further. Tools like DeepCode and Snyk use AI models trained on thousands of open-source projects and security vulnerability databases to scan your code in real-time.

They can flag subtle bugs, security holes, or inefficient code paths that might not be obvious even to seasoned developers. Because these tools have “seen” countless examples of code, they provide intelligent suggestions — for example, warning you of a potential SQL injection or recommending a more efficient algorithm — informed by patterns in vast codebases. This means potential issues are caught early in the development cycle rather than after deployment. Some AI systems can even predict which parts of a project are most likely to introduce bugs in the future, letting teams focus their code reviews on high-risk modules.

The end result is safer, more robust software delivered without an army of security experts or QA engineers combing through every line. Developers working with these AI-augmented tools can code with greater confidence, knowing the IDE is actively looking over their shoulder for mistakes or vulnerabilities. Embracing such AI assistance is rapidly becoming key to maintaining high code quality as projects grow in size and complexity.

Conclusion

From writing code and generating tests to deploying applications and ensuring security, AI is now woven into every stage of the development lifecycle. In effect, AI is transforming the software development landscape end-to-end. Engineers who leverage these new AI “super IDE” capabilities can accelerate their workflow, reduce tedious busywork, and produce more reliable software — all while focusing on the creative and complex aspects of engineering that truly require human insight. In a field that’s evolving rapidly, embracing AI tools and workflows is becoming essential for staying ahead of the curve.

To further explore how AI and software engineering intersect and to upskill yourself in these cutting-edge tools, consider attending ODSC AI West 2025, the leading applied data science conference. Join thousands of practitioners at ODSC AI West 2025 to gain hands-on training in generative AI, LLMs, RAG, AI safety, and more through expert-led workshops and bootcamps. You’ll get to explore cutting-edge tools in the AI Expo Hall, connect with industry leaders, and even tailor your experience with flexible 1- to 3-day passes.

Don’t miss this chance to expand your AI skills and network — register now to secure your spot!


Starbase hires Cameron County to police its streets and jail its offenders

Starbase is in the business of launching rockets, not policing, according to two new agreements.

Toyota adds another $1.5B to its bet on startups at every stage

Toyota made two announcements that reflect the automaker's growing interest in startups working on mobility, climate, AI, sustainability, and industrial automation.

A breach every month raises doubts about South Korea’s digital defenses

Known for its blazing fast internet and home to some of the world’s biggest tech giants, South Korea has also faced a string of data breaches and cybersecurity lapses, as its digital defenses struggle to match the pace of its digital ambitions.

Anonymous question app Sendit deceived children and illegally collected their data, FTC alleges

On Sendit, teens can send each other anonymous questions via integrations with Instagram or Snapchat.

Ted Cruz blocks bill that would extend privacy protections to all Americans

The Texas senator blocked a bill that would have prevented data brokers from selling personal data on anyone in the United States, and not just federal lawmakers and government officials.

Former OpenAI and DeepMind researchers raise whopping $300M seed to automate science

Periodic Labs has raised its seed round from a tech industry who's who, including Andreessen Horowitz, Nvidia, Elad Gil, Jeff Dean, Eric Schmidt, and Jeff Bezos.

OpenAI is launching the Sora app, its own TikTok competitor, alongside the Sora 2 model

The social app Sora will let users generate videos of themselves and their friends, which they can share in a TikTok-like feed.

AI hires or human hustle? Inside the next frontier of startup operations at TechCrunch Disrupt 2025

What happens when your first 10 hires aren’t people at all? At TechCrunch Disrupt 2025, we’re digging into the new wave of startups replacing or augmenting early employees with AI agents.

Fubo shareholders approve Hulu Live TV deal

Initially announced in January, the deal brings the companies closer to finalizing an agreement that is anticipated to disrupt the streaming industry by making Hulu a far bigger threat to its larger rival, YouTube.

ChatGPT: Everything you need to know about the AI-powered chatbot

A timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year.

A comprehensive list of 2025 tech layoffs

A complete list of all the known layoffs in tech, from Big Tech to startups, broken down by month throughout 2025.

AI note-taking app Granola adds a repeatable prompts feature

Granola is launching a way for users to repeatedly use prompts via a new feature called Recipes.

Hance will demo its kilobyte-size AI audio-processing software at TechCrunch Disrupt 2025

Hance is working on low energy-consuming, on-device processing that's already attracted the likes of Intel.

PayPal’s Honey to integrate with ChatGPT and other AIs for shopping assistance

The features will provide AI chatbot users who are researching items they want to purchase with Honey's product recommendations, pricing, and access to deals.

Japan’s beer-making giant Asahi stops production after cyberattack

A day after one of Japan's biggest brewers, Asahi Group, announced it suspended production due to a cyberattack, the company said it has no timeline for its recovery.

Why you can’t miss the aerospace content at TechCrunch Disrupt 2025

At TechCrunch Disrupt, the Aerospace Corporation is pulling back the curtain on how artificial intelligence is reshaping the space economy — through bold startups and cutting-edge research.

Amazon unveils new Echo devices, powered by its AI, Alexa+

Amazon debuts new devices, the Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, powered by AI.

Venmo and PayPal users will finally be able to send money to each other

For years, customers have been using convoluted workarounds to transfer money between the two services.

Ring cameras can now recognize faces and help to find lost pets

Amazon unveiled new AI features for its Ring devices, such as the ability to identify familiar faces and help locate a lost dog.

AI that talks back: Character AI in the spotlight with Karandeep Anand at TechCrunch Disrupt 2025

Karandeep Anand, CEO of Character AI, joins the AI Stage at TechCrunch Disrupt 2025 to discuss human-like AI companions, ethical and technical challenges, and the legal scrutiny facing conversational AI.


OpenAI says Sora has guardrails intended to block depictions of public figures and to ensure that a user's likeness is used only with their consent, via cameos (Hayden Field/The Verge)

Hayden Field / The Verge:
OpenAI's new Sora AI video app is iPhone-only, for now. … OpenAI has a new version of the Sora AI video generator that it launched …

US chipmaker Wolfspeed's shares close up 29.41% after the company emerged from Chapter 11 bankruptcy and achieved its goal of reducing overall debt by ~70% (Reuters)

Reuters:
Wolfspeed (WOLF.N) shares surged 33% on Tuesday, after the chipmaker successfully emerged from Chapter 11 bankruptcy with a substantially reduced debt load.

Anthropic's System Card: Claude Sonnet 4.5 was able to recognize many alignment evaluation environments as tests and would modify its behavior accordingly (Celia Ford/Transformer)

Celia Ford / Transformer:
Anthropic's new model appears to use “eval awareness” to be on its best behavior — Anthropic's newly-released Claude Sonnet 4.5 is …

The FTC sues Zillow and Redfin, alleging they violated antitrust laws when Zillow paid Redfin $100M to stop competing against it in online rental listings (Jody Godoy/Reuters)

Jody Godoy / Reuters:
FTC lawsuit claims Zillow-Redfin deal reduces competition in rental listings market — Zillow claims partnership benefits renters and property managers

President Trump signs an executive order directing his administration to invest $50M in AI-driven pediatric cancer research (Tina Reed/Axios)

Tina Reed / Axios:
President Trump signed an order Tuesday directing his administration to invest $50 million in AI-driven pediatric cancer research.

Sora 2 is available for free with usage limits for all users, while ChatGPT Pro subscribers have access to the higher-quality Sora 2 Pro model (Carl Franzen/Venturebeat)

Carl Franzen / Venturebeat:
OpenAI today announced the release of Sora 2, its latest video generation model, which now includes AI generated audio matching the generated video, as well.

Disney sent a cease-and-desist to Character.AI for misusing its characters in harmful ways, including sexual exploitation; the startup removed Disney characters (Sara Fischer/Axios)

Sara Fischer / Axios:
Disney said the report underscores its concern with the way its characters have been utilized on the platform.

Stripe launches Open Issuance, a stablecoin issuance platform built on Stripe's Bridge, which lets companies create their own stablecoins with minimal coding (Krisztian Sandor/CoinDesk)

Krisztian Sandor / CoinDesk:
Payments giant Stripe unveiled a stablecoin issuance platform and AI commerce tools at its New York showcase on Tuesday …

Sources: Silver Lake told investors in its latest buyout fund that it was going back to its strategy of making big bets on technology companies (Wall Street Journal)

Wall Street Journal:
Private-equity firm's co-CEO, Egon Durban, had been eyeing Electronic Arts for years — Silver Lake's Egon Durban had his eye …

Amazon partners with FanDuel to allow viewers to track their NBA bets in real-time during NBA games on Prime Video and adds shoppable NBA merch to the games (Kurt Schlosser/GeekWire)

Kurt Schlosser / GeekWire:
Amazon is planning a number of interactive features and personalization for fans when it begins airing NBA games on Prime Video starting Oct. 24.

OpenAI launches Sora 2, which it says may be the "GPT‑3.5 moment for video" with the ability to follow intricate instructions spanning multiple shots (OpenAI)

OpenAI:
Our latest video generation model is more physically accurate, realistic, and controllable than prior systems.

Google's Ad Transparency tool no longer shows any political ads, past or present, from any EU countries ahead of new EU ad transparency regulations (Samantha Cole/404 Media)

Samantha Cole / 404 Media:
Google's Ad Transparency tool no longer shows political online advertisements that ran on its platforms, in the past or present …

OpenAI releases an invitation-only Sora app on iOS, powered by Sora 2, to let people create and share AI-generated videos of themselves and their friends (Ina Fried/Axios)

Ina Fried / Axios:
An Android version will follow eventually, OpenAI told Axios. — The social app is powered by Sora 2, a new version of OpenAI's video model, which also launched Tuesday.

In the US v. Google ad tech trial, Google says it's willing to provide publishers with the data for how its ad server decides what online display ads to show (Bloomberg)

Bloomberg:
Google is willing to share more data with publishers to remedy a court's finding that the Alphabet Inc. unit illegally monopolized …

Interviews with Panos Panay and other Amazon execs about Alexa+, recruiting old Microsoft colleagues, merging hardware and software teams like Apple, more (Mark Gurman/Bloomberg)

Mark Gurman / Bloomberg:
Under Microsoft veteran Panos Panay, the company looks to add polish to its gadgets at every price level.


How much digital sovereignty does the UK have left?

Policy experts and CIOs alike are fretting over the extent to which the UK tech sector is beholden to US tech giants.

For Cisco’s Scott Manson, security is like salt in your diet – you can have too much or too little.

Cisco’s Director for Cyber Security and Resilience for the UK and Ireland on the ills of ransomware, the virtues of zero trust, and why we may be turning a corner in the war against cybercrime.


Amazon's new budget Fire TV stick ditches Fire OS for Vega OS


Amazon's new $40 Fire TV stick confirms earlier reports that the company might move away from Fire OS in favor of a leaner alternative. However, other reports suggest that the company might eventually move in the opposite direction by fully embracing Android.


MSI Afterburner 4.6.6 final release arrives with expanded GPU compatibility


MSI Afterburner 4.6.6 expands hardware compatibility to include RTX 50 GPUs with quad-fan control, and unofficial support for AMD RDNA 4 cards like the Radeon RX 9000 series. This update also adds Windows 11 skins, enhanced voltage/frequency curve editor modes, and upgrades RTSS to v7.3.7.




Google launches AI ransomware detection in Drive desktop, trained on millions of attack samples


Google believes that the constantly evolving ransomware threat requires a novel approach to prevention and detection. To that end, the company has announced a new AI-powered anti-ransomware feature for its Drive desktop utility, designed to stop file-encrypting malware even after it has breached a system.


Microsoft Office is down to $39 for a lifetime license


Word, Excel, and PowerPoint remain industry standards, but a Microsoft 365 subscription may not fit your budget. With Microsoft Office Professional 2021, you get a one-time purchase that includes Word, Excel, PowerPoint, Outlook, Access, Publisher, and OneNote. This standalone license means no subscriptions or hidden fees, just a complete suite...


Anthropic launches Claude Sonnet 4.5 with longer coding sessions and enhanced safety


According to Anthropic, Claude Sonnet 4.5 can maintain autonomous coding sessions for up to 30 hours – a substantial increase over the company's previous Claude Opus 4 model, which supported roughly seven uninterrupted hours. The firm claims that Sonnet 4.5 is stronger "in almost every way" compared with earlier versions,...


SteelSeries debuts Arctis Nova Elite for gamer audiophiles with deep pockets


SteelSeries' latest premium wireless gaming headset offers broad platform connectivity, Hi-Res certification, and effective active noise cancellation. However, reviewers note that many of these features are overkill if you don't own multiple consoles, and at $699, the headset is extremely expensive.




F-Droid warns crackdown on Android app sideloading could kill open app stores


F-Droid is raising alarms over Google's recent decision to strictly limit app sideloading on Android. After 15 years on the market, the alternative app store now faces possible closure, with Mountain View reportedly aiming to tighten its de facto monopoly over the once-open mobile platform.


Minecraft: The Copper Age is now available for Java and Bedrock editions


Minecraft's latest drop adds copper chests, golems, armor, weapons, equipment, and decorations. Shelves have also been introduced, along with other new features.




LockBit ransomware returns, targeting Windows, Linux, and VMware ESXi


Trend Micro researchers are warning that the criminal group behind LockBit has released a new version of its ransomware platform, significantly escalating the threat to enterprise systems by targeting multiple operating environments simultaneously. According to a detailed analysis of samples collected from recent attacks, the new strain – LockBit 5.0...


Logitech MX Master 4 launches with haptic feedback, check out the reviews


The Logitech MX Master 4 introduces a new haptic touchpad, an innovative Action Rings function, and improved repairability to an already excellent mouse known for its silent operation and precise sensors. However, reviewers note that the rubber surface is gone, and the mouse is now bigger and heavier than before.




Intel Granite Rapids-WS leak sheds light on Threadripper Pro 9995WX rival


A recent listing on the OpenBenchmarking database for the "Intel 0000" processor reveals a CPU with 86 cores, 172 threads, and a 4.8 GHz clock speed. It is worth noting that the tested unit was likely an engineering sample, meaning the final retail clocks could differ.


California becomes first state to require AI companies to disclose safety protocols


California has become the first state to require major artificial intelligence companies to make their safety practices public. Governor Gavin Newsom signed the bill, known as SB 53, after months of debate between lawmakers and technology firms including OpenAI, Meta, and Anthropic. The law is already attracting attention in Washington...


Sony PlayStation 5 Pro refresh could include new DualSense V3 controller with swappable battery


The first rumors that Sony is planning a refresh of the PlayStation 5 Pro, itself already a mid-cycle refresh, arrived earlier this month. Now a report from Polish outlet PPE adds more details.


Tech giants look to small nuclear to power AI's next phase


As the demand for cleaner energy grows, small modular reactors (SMRs) are drawing fresh attention, investment, and regulatory scrutiny. Though the idea dates back to the earliest days of commercial nuclear power, engineers in the 1960s pivoted toward ever-larger plants, betting that scale would deliver higher efficiency and lower costs.


Microsoft brings "vibe working" to Office with new AI Agent Mode


Microsoft has introduced a significant expansion of artificial intelligence-driven features within its Office suite, enabling users to generate highly complex spreadsheets, documents, and presentations using conversational prompts. The new capabilities, called Agent Mode and Office Agent, extend across Excel, Word, and PowerPoint, signaling a shift toward vibe working – an...


YouTube settles Donald Trump lawsuit over 2021 account suspension for $24.5 million


YouTube's move means that all three of the platforms that Trump sued over his suspensions in the wake of January 6 – the other two being Facebook/Meta and X/Twitter – have now settled. Trump argued that the suspensions were an infringement of his First Amendment rights.


Open Printer is a fully open-source inkjet with DRM-free ink and no subscriptions


Paris-based firm Open Tools plans to launch a crowdfunding campaign for a printer that focuses on repairability and customization, with no restrictions on how users refill ink.


World's tallest bridge debuts in China, soaring 2,000 feet above a canyon


The new bridge eclipses another structure in Guizhou, the Duge Bridge, which opened in 2016 and is now the second-highest bridge in the world. That competition for superlatives reflects a broader pattern in China, where engineering mega-projects have become symbols of modernization and national achievement.


Samsung Galaxy Ring swells and crushes user's finger, causing missed flight and hospital visit


Daniel Rotar, from YouTube channel ZONEofTECH, posted on X that his Galaxy Ring had started swelling while he was wearing it – you can see in the image how the ring is crushing his finger as the inner casing warps inwards.



Google expands AI Mode with visual search and new features

Google is adding new visual search tools to AI Mode, letting users search for images using natural language and save results directly.

The article Google expands AI Mode with visual search and new features appeared first on THE DECODER.

OpenAI unveils Sora 2 video model with realistic physics, high-quality audio, and a new social app

OpenAI's new Sora 2 model pushes AI video closer to the mainstream, adding more realistic physics, better control, and, for the first time, high-quality audio. The launch also includes a Sora iOS app built for sharing AI-generated videos with friends.

The article OpenAI unveils Sora 2 video model with realistic physics, high-quality audio, and a new social app appeared first on THE DECODER.

Deepseek slashes API prices by up to 75 percent with its latest V3.2 model

Deepseek has rolled out its experimental language model, Deepseek-V3.2-Exp, building on the recent V3.1-Terminus release.

The article Deepseek slashes API prices by up to 75 percent with its latest V3.2 model appeared first on THE DECODER.


JFrog Maps Strategy for AI-Driven Development Future

The world of software development is a relentless treadmill that constantly accelerates to meet the demands of enterprise users. Within

The post JFrog Maps Strategy for AI-Driven Development Future appeared first on The New Stack.

OutSystems Launches a Low-Code Workbench for Building Enterprise AI Agents

OutSystems was one of the early players in the low-code/no-code space. Like many of its competitors, including Microsoft’s Power Apps

The post OutSystems Launches a Low-Code Workbench for Building Enterprise AI Agents appeared first on The New Stack.

How WebMCP Lets Developers Control AI Agents With JavaScript


This year, MCP (Model Context Protocol) has become the glue that connects AI to the web. Following MCP-UI and NLWeb,

The post How WebMCP Lets Developers Control AI Agents With JavaScript appeared first on The New Stack.

How a Shared Test Suite Fixed the Web’s Biggest Problems


A key part of how web standards and browsers are developed, one that grew from tiny beginnings, depends on a

The post How a Shared Test Suite Fixed the Web’s Biggest Problems appeared first on The New Stack.

Beyond Basic Scaling: Advanced Kubernetes Resource Strategies


Trying to set resource requests and limits in Kubernetes is kind of like the story of “Goldilocks and the Three

The post Beyond Basic Scaling: Advanced Kubernetes Resource Strategies appeared first on The New Stack.

Broadcom Ends Free Bitnami Images, Forcing Users To Find Alternatives

This week, users of Helm and other cloud native open source projects will have to find other free sources for

The post Broadcom Ends Free Bitnami Images, Forcing Users To Find Alternatives appeared first on The New Stack.

Taming AI Observability: Control Is the Key to Success


AI is moving fast. In fact, AI advancement and adoption are moving faster than any shift we’ve seen since cloud

The post Taming AI Observability: Control Is the Key to Success appeared first on The New Stack.

What Is a Software Catalog and Why Should You Have One?


As your organization grows, so does the number of tools, services and libraries your teams rely on. New internal services

The post What Is a Software Catalog and Why Should You Have One? appeared first on The New Stack.


Microsoft moves to the uncanny valley with creepy Copilot avatars that stare at you and say your name

Yep, we're sure that will win folks over

Microsoft is testing talking avatars for Copilot to see if users feel more at ease chatting with a face instead of just a text box. Our US Editor tried them out, only to find the digital stare was more creepy than comforting.…

Google bolts AI into Drive to catch ransomware, but crooks not shaking yet

Stopping the spread isn't the same as stopping attacks, period

Google on Tuesday rolled out a new AI tool in Drive for desktop that it says will pause syncing to limit ransomware damage, but it won't stop attacks outright.…

California lawmakers pretend to regulate AI, create a pile of paperwork

LLM makers have to file a steady stream of reports in the name of transparency

A year after vetoing a tougher bill, California Gov Gavin Newsom has signed the nation's first AI transparency law, forcing big model developers to publish frameworks and file incident reports, but critics argue it's more paperwork than protection.…

Tesla on the wrong tracks with Fail Self Driving, Senators worry

Full Self-Driving mode could be on track to cause serious accidents at train crossings

A pair of US senators is asking the federal traffic safety agency to look into Tesla's self-driving software in response to complaints that it fails to stop for trains at railroad crossings.…

ServiceNow thinks you're doing AI fast and wrong

And of course thinks it can help you do it right, once it gets around to delivering

Three weeks after releasing one of its biannual platform upgrades, ServiceNow has started delivering an "AI Experience."…


OpenAI's e-commerce takeover

PLUS: Anthropic launches Claude Sonnet 4.5


The Sequence Knowledge #728: Circuits, Circuits, Circuits

An overview of circuit tracing in AI interpretability.


Kepler Cheuvreux promotes from within for new global head of equity execution sales

The appointment also coincides with an additional promotion for the firm’s new global head of low touch and portfolio trading. 

The post Kepler Cheuvreux promotes from within for new global head of equity execution sales appeared first on The TRADE.

The TRADE’s Q3 Magazine: Now available online!

The latest edition of The TRADE Magazine is now live, featuring an abundance of exciting new content; check out the highlights here.  

The post The TRADE’s Q3 Magazine: Now available online! appeared first on The TRADE.

Euronext launches first fully integrated marketplace for European ETFs and ETPs

The offering will unify listing, trading, clearing and settlement and is set to address fragmentation and distribution issues across European ETF markets.  

The post Euronext launches first fully integrated marketplace for European ETFs and ETPs appeared first on The TRADE.

FalconX launches new platform to enable 24/7 institutional OTC options trading

The offering aligns with a growing interest in 24/7 trading across the industry in recent months, with exchanges such as Nasdaq and Cboe Global Markets unveiling plans to extend US equities trading hours. 

The post FalconX launches new platform to enable 24/7 institutional OTC options trading appeared first on The TRADE.


Google is blocking AI searches for Trump and dementia

Google appears to have blocked AI search results for the query "does trump show signs of dementia" as well as other questions about his mental acuity, even though it will show AI results for similar searches about other presidents. When making the search about President Trump, AI Overviews will display a message that says, "An […]

TikTok, #freedom edition

Hello and welcome to Regulator. Today is the last day of The Verge's very good subscription sale: $4 for a month and $35 for the year, for full access to the entire site. Don't delay! When we launched Regulator two months ago, the premise was that I'd write about the collision between Big Tech and […]

Refurbished Sonos headphones, speakers, and soundbars are up to 25 percent off right now

With President Trump’s tariffs pushing Sonos prices higher, scoring a deal feels extra special. Sonos is currently offering up to 25 percent off a range of refurbished devices, including the Sonos Era 100, Era 300, and the portable Move 2, with prices starting at $134. If you […]

You can now preorder LG’s 6K 32-inch Thunderbolt 5 display for $2,000

LG first announced its new 32-inch UltraFine monitor at CES 2025. The company still bills it as the “world’s first 6K monitor with Thunderbolt 5 connectivity” built right in. Ahead of the display launching in South Korea and Japan in September, followed by a US launch next month, LG has shared more specs and pricing […]

Imgur is blocking users in the UK

The image-sharing site Imgur has shut off access to UK users after the country’s data watchdog warned the platform of a fine, as reported earlier by the BBC. In a post on Imgur’s help page, Imgur confirms that users in the UK can no longer log in, view content, or upload images starting September 30th, […]

Microsoft is giving Copilot AI faces you can chat with

Microsoft is trying to make Copilot more approachable by giving the AI assistant an animated face to talk with. The experimental “Portraits” feature in Copilot Labs is currently available in the US, UK, and Canada, and provides 40 stylized human avatars that respond with natural expressions during real-time voice conversations. Announcing the new feature on […]

Beats redesigned its new Powerbeats Fit’s wing tip to be more comfortable and secure

Beats has announced a new version of the Fit Pro wireless earbuds that launched in late 2021, which are now called the Powerbeats Fit. The most notable upgrade is a redesigned wing tip that Beats says is 20 percent more flexible, improving comfort while also keeping the earbuds more securely anchored in your ear. The […]

OpenAI’s new social video app will let you deepfake your friends

OpenAI has a new version of the Sora AI video generator that it launched at the end of last year, and it’s arriving today alongside a new social video app, also called Sora, for iPhones. The currently invite-only app resembles TikTok with a feed of videos you can shuffle through. But instead of encouraging people […]

Microsoft’s Windows 11 2025 update is available now

Microsoft is rolling out its annual Windows 11 update (known as version 25H2) today. After testing the update in the Windows Insider Release Preview ring last month, version 25H2 is now starting to roll out to all Windows 11 users through Windows Update. The 25H2 update isn’t a huge one, so it won’t take long […]

Apple’s M5 iPad Pro might have leaked in Russia

Two Russian YouTubers have posted videos unboxing what appears to be an unannounced iPad Pro with an M5 chip. Rumors have indicated that Apple could launch an M5 iPad Pro as early as October, and these videos may be our first look at the actual product. Externally, this new M5 iPad Pro doesn’t appear to […]


Can I Trust AI to Give Me Good Travel Advice?

Orit Ofri thought she could trust AI to give her travel advice for a recent trip to Paris.

The Future is Transparent: 3 Shifts in GenAI Explainability and Self-Justifiability

Why the focus is rapidly moving from post-hoc fixes to intrinsically interpretable and self-justifiable models.

We’ve built the most powerful tools in history, but we can’t always see how the magic happens.

The Chef in the Black Box

Picture this. You’re in the world’s most advanced hospital. A patient is critical, and the new AI super-doctor, “GPT-Cure,” analyzes a mountain of data and instantly prescribes a novel, life-saving treatment protocol. The human doctors are stunned. It’s brilliant. But before they administer it, they ask a simple, career-saving question:

“Okay, why this treatment? What’s your reasoning?”

GPT-Cure just… whirs. Its screen remains blank. It has given you the what, but it can’t give you the why.

This, my friend, is the trillion-dollar problem at the heart of the AI revolution. We’ve built the most powerful tools in human history, but they are fundamentally “black boxes.” Their genius comes from a complexity so vast that even their creators can’t fully peek inside to see how the magic happens.

The AI is a brilliant chef, but it’s cooking inside a black box. You can’t trust the dish if you can’t see the kitchen.

Think of it like a brilliant but silent Michelin-starred chef. He creates a masterpiece dish that could win awards. You can taste the result, and it’s divine. But you have no idea what ingredients he used, what steps he followed, or if he washed his hands. You can’t replicate it, you can’t debug it if someone gets sick, and you can’t be sure it’s safe for the person with a deadly peanut allergy.

This opacity isn’t just a quirky feature; it’s a direct barrier to trust, safety, and accountability. And the global response to this challenge — the field of Explainable AI (XAI) — is undergoing a massive transformation. We’re in the middle of three seismic shifts that are changing the game from just explaining a decision after the fact to building AI that is transparent and accountable from the ground up.

Let’s stir the pot and see what’s cooking.

“The great enemy of knowledge is not ignorance, it is the illusion of knowledge.” — Stephen Hawking

The Stakes: The Health Inspector is Coming

So, why is everyone suddenly in a panic about our silent chef? For a while, we were happy just eating the fancy food. But now, the stakes have been raised to the moon.

The regulators have arrived. The Wild West days of AI are over, and accountability is now on the menu.

1. The Regulatory Pressure Cooker: The days of the Wild West of AI are over. The grown-ups have entered the room, and they’re carrying clipboards. Frameworks like the EU AI Act are putting legal teeth into the demand for transparency. They’re not just asking about the model’s logic; they’re demanding to see the entire supply chain — the training data, the intended purpose, the limitations (Gyevnar et al., 2023). An opaque AI is rapidly becoming a compliance and liability nightmare waiting to happen. The health inspector doesn’t care how good your soup is if you can’t prove it’s not poisoned.

2. The High-Stakes Deployment Barrier: We want to use GenAI for the big stuff: diagnosing diseases, drafting legal arguments, managing financial markets, designing critical infrastructure. But deploying a system you don’t understand in these fields isn’t just risky; it’s professionally negligent. Would you trust a bridge designed by an AI that can’t “show its work” on the physics calculations? The lack of clear, defensible reasoning is the single biggest barrier preventing GenAI from becoming a trusted professional tool instead of a fascinating novelty (Ji et al., 2023).

3. The Eroding Trust Ecosystem: We live in an age of deepfakes and rampant misinformation. If we can’t trace the provenance of AI-generated articles, images, or code, how can we trust anything? Our entire information ecosystem is at risk. Without robust accountability, the line between fact and “plausible hallucination” blurs into non-existence, poisoning the well of public trust.

ProTip: When using a GenAI tool for any serious work, always operate with “professional paranoia.” Ask yourself: If this AI is wrong, what is the worst-case scenario? Then, work backward to verify its claims using trusted, independent sources. Never let “the model said so” be your final answer.

Shift 1: From Kitchen Taster to Kitchen Architect

The first attempt to understand our silent chef was, logically, to hire tasters.

The Old Way: Explanations as an Afterthought

The first wave of XAI gave us tools like LIME and SHAP. These are brilliant post-hoc techniques. Essentially, they work from the outside in. After the chef has made the dish, these “tasters” poke and prod it, trying to figure out what ingredients were most important. They might say, “I’m detecting strong notes of saffron and a hint of paprika, which likely contributed to the final flavor profile.”

This is like trying to understand why a car crashed by only looking at the skid marks on the road. It’s clever, it gives you some clues, but you have no idea what was actually happening inside the engine when things went wrong. These methods give you an approximation of the model’s reasoning, but it’s not always a faithful one.
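To make the "kitchen taster" concrete, here is a deliberately tiny sketch of perturbation-based attribution in the spirit of LIME and SHAP. It is neither actual algorithm, and the `black_box` model with its hidden weights is invented for illustration; the move is simply to knock one ingredient out at a time and record how much the output shifts.

```python
def black_box(features):
    """Stand-in for an opaque model: a hidden weighted sum (hypothetical)."""
    weights = {"saffron": 3.0, "paprika": 0.5, "salt": -2.0}
    return sum(weights[name] * value for name, value in features.items())

def perturbation_attribution(model, features):
    """Crude post-hoc attribution: zero out one feature at a time and
    measure how far the model's output moves from the baseline."""
    baseline = model(features)
    scores = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = 0.0  # knock the ingredient out of the dish
        scores[name] = baseline - model(perturbed)
    return scores

dish = {"saffron": 1.0, "paprika": 2.0, "salt": 0.5}
print(perturbation_attribution(black_box, dish))
# {'saffron': 3.0, 'paprika': 1.0, 'salt': -1.0}
```

Real post-hoc tools are far more sophisticated (local surrogate models, Shapley values over feature coalitions), but the core move is the same: probe from the outside and infer importance from the response, without ever opening the box.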

The New Paradigm: Transparency by Design

The new shift is revolutionary. Instead of trying to guess what’s in the dish, we’re now redesigning the kitchen to have glass walls. We’re moving toward building models that are intrinsically interpretable.

We’re moving from AI archaeologists, digging through the ruins of a decision, to AI architects, designing transparent systems from the ground up.

This is where the real fun begins. Researchers are now performing the AI equivalent of neurosurgery. In a landmark study called GAN Dissection, scientists literally went inside an image-generating AI to find the exact “neurons” that corresponded to real-world objects (Bau et al., 2019). They found the cluster of neurons responsible for “trees.” How did they know? Because they turned those neurons off, and poof — the trees vanished from the pictures. They turned them on, and trees appeared. This is a direct, causal link. It’s not guessing; it’s seeing the wiring.
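The ablation trick scales down to a toy you can run. In this sketch (hand-set weights on a two-layer network, nothing like a real GAN), hidden unit 1 is deliberately wired as the sole source of the "trees" output, so zeroing it makes the trees vanish while the "sky" output is untouched:

```python
def relu(x):
    return max(x, 0.0)

# Hypothetical hand-set weights for a toy two-layer "generator".
W1 = [[1.0, 0.5], [0.3, 2.0]]
W2 = [[1.0, 0.0],   # output 0: "sky"   reads hidden unit 0 only
      [0.0, 1.5]]   # output 1: "trees" reads hidden unit 1 only

def generate(z, ablate_unit=None):
    """Forward pass with an optional dissection-style ablation."""
    h = [relu(sum(w * x for w, x in zip(row, z))) for row in W1]
    if ablate_unit is not None:
        h[ablate_unit] = 0.0  # "turn the neuron off"
    return [sum(w * x for w, x in zip(row, h)) for row in W2]

full = generate([1.0, 1.0])                     # sky and trees both present
no_trees = generate([1.0, 1.0], ablate_unit=1)  # trees gone, sky untouched
print(full, no_trees)
```

Real GAN dissection operates on convolutional feature maps and scores units against segmentation masks, but the causal test is exactly this: intervene on a unit and watch which concept disappears.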

Another brilliant approach is creating Concept Bottlenecks. This forces the AI to think in human-understandable terms before spitting out an answer. Imagine a medical AI analyzing a skin lesion. Instead of just jumping to “95% chance of malignancy,” a concept bottleneck model is forced to first conclude:

  1. Feature A: Asymmetrical Shape — True
  2. Feature B: Irregular Borders — True
  3. Feature C: Varied Color — True

Therefore, my conclusion is…

This makes its reasoning process transparent and verifiable for a human expert (Yu et al., 2025). We’re moving from being AI archaeologists, digging through the ruins of a decision, to being AI architects, designing transparent systems from the blueprint up.
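As a runnable sketch of the idea (thresholds, feature names, and the decision rule are all invented for illustration, not clinical guidance), a concept-bottleneck pipeline forces every prediction through a named, auditable middle layer:

```python
def extract_concepts(lesion):
    """Stage 1: commit to human-readable concepts before any diagnosis."""
    return {
        "asymmetrical_shape": lesion["asymmetry"] > 0.5,
        "irregular_borders": lesion["border_score"] > 0.7,
        "varied_color": lesion["num_colors"] >= 3,
    }

def classify(concepts):
    """Stage 2: a transparent rule that sees only the concepts."""
    flags = sum(concepts.values())  # True counts as 1
    return "high risk" if flags >= 2 else "low risk"

lesion = {"asymmetry": 0.8, "border_score": 0.9, "num_colors": 4}
concepts = extract_concepts(lesion)  # an expert can audit this dict directly
print(concepts, classify(concepts))
```

Because the final rule can only read the concept layer, a dermatologist can veto any single concept ("the borders are not irregular") and see exactly how the conclusion changes. That checkpoint is what an end-to-end black box gives up.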

Trivia: The term “neuron” in a neural network is just a metaphor! It’s a mathematical function, not a biological cell. But researchers in “mechanistic interpretability” are finding that these functions can sometimes organize themselves to represent concepts in a way that’s eerily similar to how we think brains might work.

Shift 2: From “Here’s My Brain Scan” to “Here’s My Homework”

So, we built the glass kitchen. We can now see every neuron fire, every calculation whir. We give this incredibly detailed printout to the human doctor. Problem solved, right?

Wrong. Dangerously wrong.

The Sobering Reality: Explanations Can Backfire

A groundbreaking study delivered a gut punch to the XAI community. Researchers found that giving users detailed technical explanations often didn’t help them make better decisions. In fact, it frequently made things worse by creating “automation bias” (Bansal et al., 2020). People saw the complex, sci-fi-looking chart and thought, “Wow, this thing is smart!” and then proceeded to over-trust the AI, even when its advice was dead wrong.

It’s like this: if you’re trying to check a mathematician’s proof, a neuroscientist showing you an fMRI of the mathematician’s brain isn’t helpful. What you need is for the mathematician to show you their work on the blackboard, step-by-step.

The New Goal: Justifiability over Explainability

This brings us to the most important shift in mindset. For high-stakes domains, the goal is no longer just explainability; it’s justifiability.

We don’t need to see the AI’s brain scan. We need it to show us its homework.

We don’t need a printout of the AI’s “brain activity.” We need the AI to defend its conclusion in a language we can all understand and scrutinize. As one brilliant paper argues, a legal AI shouldn’t just say, “The defendant is liable.” It must justify this conclusion by citing specific case law, pointing to relevant statutes, and quoting evidence from the provided documents (Wehnert, 2023).

It needs to show its homework.
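One way to read "justifiability" is as a contract the system must satisfy before its answer is even admitted. The schema below is a toy of my own invention, not taken from the cited paper: a conclusion is rejected unless it comes with claims, and every claim carries at least one checkable citation.

```python
def accept(answer):
    """Admit a conclusion only if every claim names at least one source."""
    has_conclusion = bool(answer.get("conclusion"))
    claims = answer.get("claims", [])
    return has_conclusion and bool(claims) and all(
        claim.get("sources") for claim in claims
    )

justified = {
    "conclusion": "defendant is liable",
    "claims": [
        {"text": "a duty of care existed", "sources": ["Case A v. B (1932)"]},
        {"text": "the duty was breached", "sources": ["Exhibit 12, p. 4"]},
    ],
}
bare = {"conclusion": "defendant is liable",
        "claims": [{"text": "trust me"}]}

print(accept(justified), accept(bare))  # True False
```

The sources here are made up, but the gatekeeping pattern is the point: the burden of proof sits on the model's output, where a human reviewer can check it, not on its hidden internals.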

This reframes everything around the user. It aligns AI accountability with the standards of evidence that have governed law, medicine, and science for centuries. In my old cybersecurity days, we had a saying: “In God we trust; all others must bring data.” For AI, the new mantra is: “In models we test; all others must justify their claims.”

“The first principle is that you must not fool yourself — and you are the easiest person to fool.” — Richard P. Feynman

Shift 3: From Inspecting the Chef to Auditing the Entire Supply Chain

For years, we’ve been obsessed with the chef — the model itself. We analyzed its every move in the kitchen. But we missed the most important question:

Where did the ingredients come from?

The Old Blind Spot: The Model in a Vacuum

An AI model is a product of its training data. A model trained on a biased, toxic, or factually incorrect dataset will, unsurprisingly, produce biased, toxic, or incorrect outputs. Focusing only on the model’s logic at the moment of decision-making is like blaming the oven for a cake that tastes terrible when you used salt instead of sugar.

The New Frontier: Auditing the Entire AI Lifecycle

The final and most expansive shift is to zoom out and apply transparency to the entire ecosystem, especially the data supply chain.

A model is only as good as its training data. The new frontier is auditing the entire data supply chain, from source to synthesis.

This is where things get really “Inception”-like. We are now using GenAI to create massive amounts of synthetic data to train other AIs. This creates a terrifying risk of a feedback loop, where biases and errors are amplified with each generation. It’s like making photocopies of photocopies — the quality degrades until you’re left with a distorted mess.

Groundbreaking new research is developing methods to audit an AI model or a dataset to determine if it was trained on AI-generated data, even without seeing its internal code (Wu et al., 2025). This is a “data forensics” capability. It’s like creating a test to tell if your “farm-to-table” vegetables were grown in a field or 3D-printed in a lab. Knowing the provenance of your data is fundamental to maintaining information integrity.

ProTip: Before you trust a new AI model, look for its “Model Card” or “Datasheet” (Mitchell et al., 2019; Gebru et al., 2021). These are transparency documents that should describe what data the model was trained on, its intended uses, and its known limitations. If a vendor can’t provide one, that’s a major red flag.
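Here is what the bones of such a document can look like, sketched as structured data with invented field values; real model-card templates (Mitchell et al., 2019) define many more fields:

```python
# A minimal, hypothetical model card; every value here is illustrative.
model_card = {
    "model": "lesion-classifier-v2",
    "intended_use": "triage support; not a diagnostic device",
    "training_data": "public dermoscopy images with documented provenance",
    "known_limitations": ["under-represents darker skin tones"],
}

def red_flags(card):
    """List the basic transparency questions a vendor failed to answer."""
    required = ["intended_use", "training_data", "known_limitations"]
    return [field for field in required if not card.get(field)]

print(red_flags(model_card))  # [] : this card answers the basics
```

Run the same check against a vendor's documentation: any field they cannot fill in is exactly the gap you would otherwise discover in production.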

The Reality Check: This Stuff is Hard

Now, before we get too carried away, let’s take a breath. This journey toward transparency isn’t all sunshine and rainbows.

True transparency at the scale of today’s foundation models is a monumental engineering challenge.

  • The Scalability Challenge: Many of the coolest mechanistic interpretability techniques work on smaller models. Dissecting a model with a few million parameters is one thing; doing it for a foundation model with trillions is like trying to map the wiring of a human brain one synapse at a time. It’s a monumental challenge.
  • The “Force of Nature” Debate: Can we ever fully explain these systems? Some researchers argue that at a certain complexity, AI might become more like the weather. We can predict it, manage its risks, and build shelters, but we can’t explain the movement of every single water molecule in a hurricane (Nakao, 2025). This suggests we need to focus as much on robust risk management as we do on perfect explanation.
  • The Quest for a Unified Framework: Right now, XAI is a bit like a collection of specialized tools. The tool for explaining an image generator is different from the one for a language model. The field is still searching for the “Swiss Army knife” — universal principles of explainability that apply everywhere.

The Path Forward: Your marching orders

So, what does this all mean for you?

  • For Policymakers & Regulators: Your definition of “transparency” must evolve. Stop focusing only on the algorithm. Demand accountability for the entire data lifecycle. Mandate data provenance reports and the right to audit synthetic data ecosystems, just as the EU AI Act is beginning to do.
  • For Executives & Strategists: Change the questions you ask your AI vendors. Don’t just ask, “Is your AI explainable?” That’s a meaningless yes/no question. Ask, “Is it justifiable? Can it cite its sources? Can you produce an audit trail for the training data that will stand up to scrutiny in our industry?” Demand the recipe, not just a free sample.
  • For Researchers & Developers: The future is in building glass kitchens, not designing better keyholes for black boxes. Prioritize research in intrinsically interpretable architectures, human-centric evaluation benchmarks, and robust data auditing tools. A post-hoc fix for an opaque system will increasingly be seen as a temporary patch, not a long-term solution.

The Post-Credits Scene

The conversation around AI transparency has finally grown up. We are moving past the simplistic desire to “open the black box” and toward a sophisticated, multi-layered strategy for building trust.

The three shifts — from post-hoc fixes to intrinsic design, from technical explanations to human-centric justification, and from a narrow model-centric view to a broad ecosystem-wide audit — are the pillars of this new era. They are our best hope for building a future where generative AI isn’t just a powerful and mysterious oracle, but a reliable, safe, and accountable partner in solving humanity’s biggest challenges.

Now, who wants more tea? The next pot is brewing.


Disclaimer: The views and opinions expressed in this article are my own and do not necessarily reflect the official policy or position of any past or present employer. This article was drafted with the assistance of generative AI, which was used for research, summarization, and brainstorming. The images in this article were generated using AI. This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License (CC BY-ND 4.0).


The Future is Transparent: 3 Shifts in GenAI Explainability and Self-Justifiability was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

From Knowledge to Power: How AI Is Reshaping the World

A journey through the innovations that are rewriting information, work, creativity, and politics.

Image generated by author

Introduction

The journey you are about to embark on is a map of the near and distant future of artificial intelligence. Not an abstract list of possibilities, but a trajectory that begins with what is already before our eyes — search engines becoming assistants, apps dissolving into personal agents — and extends to the most radical scenarios, where AI could even assist governments and political systems in decision-making.

The chapters organize the content as a temporal countdown. Throughout this journey, you’ll find a common thread: how software is becoming increasingly autonomous, intelligent, and capable of shaping experiences, markets, and institutions.

A key clarification: this article isn’t about robotics. We won’t be covering mechanical arms, automated assembly lines, self-driving cars, or drone swarms. That’s a parallel chapter in the technological revolution — worthy of its own analysis, and perhaps a dedicated guide in the future.

The focus of this work is instead on the invisible heart of transformation: software. Not the bodies of machines, but their minds. The digital agents that live in our phones, in company servers, and in the services we use every day. Systems capable of collecting data, analyzing it, making decisions, and even generating content.

This intangible yet powerful space will host the most important game: the one concerning access to information, financial markets, creativity, health, and politics. This unseen terrain, lacking physical form, can profoundly alter our daily lives, institutions, and even the values upon which society is based.

1. From Research to Answer: How AI Agents Are Changing Access to Information

We’re witnessing a sea change: the traditional list of search results could soon be a thing of the past. We’ll no longer have to open dozens of links and manually compare information; AI agents will do it for us. These systems will perform searches, retrieve relevant content, analyze it, and return a clear, targeted summary. It’s the shift from “searching for pages” to “searching for answers,” with a radical impact on the user experience.
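The pipeline described above — search, retrieve, analyze, summarize with sources — can be sketched in a few lines. This is a toy illustration, not any real engine's implementation: the `Source` records, URLs, and the naive term-overlap scoring are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str
    reliability: float  # 0..1, e.g. derived from a domain whitelist

def answer_first_search(query: str, sources: list[Source], top_k: int = 2) -> str:
    """Rank retrieved snippets and return a cited summary instead of a link list."""
    terms = query.lower().split()
    def score(s: Source) -> float:
        # Naive relevance: query terms present in the snippet, weighted by reliability.
        overlap = sum(t in s.snippet.lower() for t in terms)
        return overlap * s.reliability
    ranked = sorted(sources, key=score, reverse=True)[:top_k]
    bullets = "\n".join(f"- {s.snippet} [{s.url}]" for s in ranked)
    return f"Answer to: {query}\n{bullets}"

corpus = [
    Source("https://example.org/a", "Beef production emits roughly 60 kg CO2e per kg.", 0.9),
    Source("https://example.org/b", "Cultivated meat could cut emissions substantially.", 0.7),
    Source("https://example.org/c", "Celebrity diets trend on social media.", 0.2),
]
print(answer_first_search("beef production emissions", corpus))
```

A production agent would replace the scoring stub with retrieval and a language model, but the shape is the same: the user sees a synthesized, cited answer, not a results page.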

Image generated by author

A concrete scenario

Imagine Sonia, a university student. She has to write an essay on the climate impact of meat production. A traditional search would inundate her with hundreds of scientific articles, blogs, institutional reports, and conflicting opinions. The hardest part wouldn’t be reading, but sorting through them: figuring out what’s reliable, what’s up-to-date, and what’s contradictory. With her AI-powered search engine, Sonia no longer starts with an endless list of links. Instead, she receives a streamlined overview: the latest emissions data, academic sources ranked by reliability, and concise infographics ready to be dropped straight into her essay. The agent also shows her the sources’ provenance and the margins of uncertainty, allowing her to decide how much to trust. The most surprising step, however, comes next: the AI also offers counterarguments. Alongside data highlighting the environmental impact, it points out studies showing how innovative techniques, such as lab-grown meat or water-efficient supply chains, can mitigate part of the problem. Sonia doesn’t just summarize; she constructs a critical and balanced essay, closer to true academic reasoning than a simple collage of sources. For her, research is no longer a scattered, time-consuming process, but an ongoing dialogue with an assistant that filters, organizes, and stimulates critical thinking.

Socio-economic consequences

This new paradigm challenges the advertising model that has supported search engines for decades: if there are no longer links to click, how will the visibility of content and the online information market change? Publishers and content creators face a crucial challenge: being “read” and valued by AI agents. Those who adapt will connect with audiences in new, direct ways — while those clinging to old models risk fading into the background of machine-generated summaries.

Examples

The transformation is already underway. Google has introduced Gemini and AI Overviews in its search results; Microsoft has integrated Bing Chat, and new engines like Perplexity and You.com have emerged. All these products adopt a conversational approach in which queries no longer return a list of links, but a summary response with cited sources. They are still hybrid systems, maintaining the old logic alongside the generative one, but clearly indicating the direction of change.

Time horizon

0–2 years — Multimodal searches (text, images, voice) will become standard in the main engines. Already today, over 20% of Google users report having interacted at least once with Gemini/AI Overviews (Statista, 2024). The main limitation remains accuracy: hallucinations, delays in updating datasets, and copyright risks prevent fully “blind” adoption. Indicators: the share of queries with generative answers compared to traditional SERPs, and the number of publishers who choose to be indexed directly by agents.

3–5 years — AI agents will become capable of conducting complex and continuous research, such as long-term monitoring or cross-sectional comparative analyses. Within this window, 30–40% of global searches could be managed in an “answer-first” manner (McKinsey, 2024). Risks: inference costs are still high for long and multimodal queries, and regulatory resistance related to the transparency of sources and the economic impacts on publishers.

5–10 years — The traditional list of links could disappear entirely, replaced by organic and personalized results with adaptive interfaces. Assistants will become permanent and proactive: they will monitor interests, anticipate information needs, and provide real-time alerts. Risks: concentration of power (a few global providers controlling access to knowledge) and loss of information pluralism. Regulators, especially in Europe, could impose visibility quotas or extensive citation requirements to preserve a balanced ecosystem.

2. No more product and service comparisons, but tailor-made advice

The search for and selection of products and services are changing. Until now, we’ve relied on specialized sites that allow us to compare items based on technical specifications and price: phones, cars, and electronic components. These tools work well in some sectors, but they don’t cover the entire range of available goods. If today we wanted a detailed comparison of dietary supplements, cosmetics, or artisanal products, we’d be hard-pressed to find dedicated platforms.

With agentic AI, this limitation could disappear. Thanks to the combination of search, extraction, and analysis, intelligent agents will create personalized comparison tables in real time, even for categories that currently lack dedicated portals. In just a few seconds, what would take an expert hours of work can be summarized into clear, dynamic tables, tailored to the needs of each user.
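The "personalized comparison table" an agent would assemble can be illustrated with a minimal sketch. The product records and fields below are invented stand-ins for data an agent would extract from heterogeneous pages; the filtering and ranking logic is the point.

```python
# Hypothetical records, as an agent might extract them from different shop pages.
products = [
    {"name": "VitaD Forte", "dose_iu": 2000, "price_eur": 12.5, "rating": 4.6},
    {"name": "SunBoost",    "dose_iu": 1000, "price_eur": 7.9,  "rating": 4.1},
    {"name": "D3 Max",      "dose_iu": 4000, "price_eur": 15.0, "rating": 3.8},
]

def comparison_table(items, max_price=None, min_rating=0.0):
    """Filter and rank extracted product records, then render a plain-text table."""
    rows = [p for p in items
            if (max_price is None or p["price_eur"] <= max_price)
            and p["rating"] >= min_rating]
    # Best-rated first; cheaper wins ties.
    rows.sort(key=lambda p: (-p["rating"], p["price_eur"]))
    header = f"{'Product':<12}{'IU':>6}{'EUR':>7}{'Rating':>8}"
    lines = [f"{p['name']:<12}{p['dose_iu']:>6}{p['price_eur']:>7.2f}{p['rating']:>8.1f}"
             for p in rows]
    return "\n".join([header] + lines)

print(comparison_table(products, max_price=13, min_rating=4.0))
```

The hard part in practice is not the table but the extraction and normalization feeding it; that is exactly where agents promise to outdo today's vertical comparators.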

Image generated by author

A concrete scenario

Marco needs to buy a dietary supplement, but he’s inexperienced and gets lost among dozens of labels and conflicting reviews. Until now, he would have consulted unreliable forums or blogs, or e-commerce sites with promotional descriptions. With his AI product comparison tool, however, he only needs to ask one question: “Which vitamin D supplement is best for a 40-year-old man who spends little time in the sun and does light exercise?” In just a few seconds, he gets a clear comparison: a personalized table with active ingredients, dosages, average prices, verified reviews, and even alerts on potential side effects. What’s more, the agent identifies the safest products based on clinical data and directs him to the most reliable retailers. Marco no longer has to navigate through ten product pages: he finds the right answer, ready to use.

Socio-economic consequences

For e-commerce, this is a radical change. It will no longer be the user who has to navigate dozens of storefronts, but the agent who will bring only the most suitable solutions to the user’s attention. Brands will therefore be under increasing pressure to ensure data quality and transparency. Incomplete descriptions, unreliable reviews, or inaccurate specifications could result in AI being excluded from the selection process. The competition will therefore shift from flashy marketing to robust and verifiable information.

Examples

The current landscape is dominated by vertical comparators such as Versus, Kimovil, and PCPartPicker, which offer very detailed comparisons but remain confined to their respective categories. There are also AI demos such as GravityWrite, capable of generating product comparisons, but these are experimental tools designed for a professional audience rather than for consumer use. In other words, there is not yet a universal smart comparator that covers all product categories: this is precisely where agentic AI promises true disruption.

Time horizon

0–2 years — Traditional vertical comparators will continue to dominate, supported by a global market estimated at over $20 billion (Allied Market Research, 2023). Consumer AI for product comparison will remain mostly experimental demos or internal tools within e-commerce platforms. Limitations: difficulty in ensuring the accuracy of the data collected, poor API integration in the most fragmented sectors (e.g., cosmetics, nutraceuticals). Risks: lack of shared standards and possible legal disputes over liability for comparisons.

3–5 years — The first intelligent comparators aimed at the general public will begin to emerge, capable of collecting data from heterogeneous sources and generating personalized, dynamic comparison sheets in real time. Users will be able to ask “which cream is best for my skin and my budget?” and receive not only a table but also contextual recommendations. Indicators: a growing share of product searches (20–30%) filtered by generative systems integrated into engines such as Google and Amazon. Limitations: risk of hallucinations (non-existent products or incorrect combinations), slow data collection from sources without APIs. Risks: resistance from companies to making prices and complete technical data sheets transparent, which can limit the database.

5–8 years — Classic comparators risk becoming marginal: autonomous personal agents will not only prepare comparisons but also make purchases directly on behalf of the user, choosing based on explicit preferences and behavioral history. Indicators: over 50% of online purchasing decisions influenced by personal AI agents (McKinsey, 2030 scenario). Limitations: complexity in managing trust, since the user will have to understand whether the agent is acting in their best interests or in those of the provider. Risks: antitrust regulation and algorithmic transparency (the EU and the US could impose stringent constraints on the opacity of recommendation systems).

3. Beyond apps: the arrival of multimodal personal assistants

For over a decade, apps have mediated our relationship with technology: icons on phones or PCs, each with a specific function. But this paradigm is set to change. Multimodal, always-on personal assistants promise to become the new primary interface: agents that live simultaneously on smartphones, PCs, smartwatches, headphones, or AR glasses, capable of understanding context (voice, location, the screen being viewed, even physiological state) and completing complex tasks from start to finish. No more juggling ten different apps to book a flight, fill out a form, or request a refund: a single assistant orchestrates everything, communicating with the systems on our behalf.

The difference compared to the voice assistants of the past (Siri, Alexa, Google Assistant) is radical. They could only execute simple commands. Imagine: “Book me a flight to Milan the day after tomorrow morning, the cheapest one that fits my schedule.” The agent not only searches and compares, but also cross-references commitments, calculates travel times, fills in payment details, sends the receipt to accounting, and adds the reservation to the calendar.
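The flight-booking request above is really a chain of tool calls the assistant threads together. The sketch below is a toy orchestration loop with invented tools (`search_flights`, `calendar_free`, `book`) and fake data; a real agent would plan these steps with a language model and call live APIs.

```python
# Invented tool stubs standing in for real flight, calendar, and booking APIs.
def search_flights(dest, day):
    return [{"id": "AZ123", "dest": dest, "day": day, "price": 89},
            {"id": "FR456", "dest": dest, "day": day, "price": 59}]

def calendar_free(day, flight):
    # Pretend conflict check: one flight clashes with an existing meeting.
    return flight["id"] != "AZ123"

def book(flight):
    return {"confirmation": f"BOOKED-{flight['id']}"}

def handle_request(dest, day):
    """Cheapest flight that fits the schedule: search, cross-check, then book."""
    flights = sorted(search_flights(dest, day), key=lambda f: f["price"])
    for f in flights:
        if calendar_free(day, f):
            return book(f)
    return None  # nothing fits; a real agent would ask a follow-up question

print(handle_request("Milan", "2025-10-03"))
```

What separates this from old voice assistants is the chaining: the result of one tool call constrains the next, with no user taps in between.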

Image generated by author

A concrete scenario

Lucia, a manager at a multinational company, wakes up at 6:30 a.m. She doesn’t pick up her smartphone: the personal assistant integrated into her smartwatch has already prepared her day. It has monitored her sleep quality, synced her schedule with her calendar, calculated traffic, and suggested moving a meeting by half an hour to avoid delays. During breakfast, Lucia simply says, “Find the cheapest flight to London on Friday that works for my schedule.” The assistant not only books the ticket, but also automatically adjusts reminders, adds the reservation to the shared calendar, sets the alarm earlier that day, and suggests a taxi ride at the optimal time. Later, while she’s driving, the AR headset reads a document to her and, at her nod, transforms it into a bullet-point summary to send to colleagues. In the evening, the assistant detects that Lucia is tired and suggests rescheduling a non-urgent call for the next day. In this daily routine, Lucia no longer interacts with individual apps but with a single agent that orchestrates devices, services, and decisions, becoming a true digital extension of her mind.

Socio-economic consequences

The arrival of always-on personal assistants will radically change our relationship with digital services. For consumers, it will mean greater convenience, less wasted time, and seamless access to information without navigating a thousand interfaces. For companies, it will be a shock: competition will no longer be between apps, but to be “chosen” by the agent. This will require transparency in pricing and terms, reduce customer lock-in, and push toward API-based models and interoperability. From an ethical perspective, significant risks arise: if a single agent filters all our digital decisions, who can guarantee it’s acting in our best interests and not those of the provider that develops it? And what happens to privacy if an entity collects and connects every fragment of our daily behavior?

Examples

Some signs are already visible. OpenAI’s GPT-4o and ChatGPT with memory represent the first multimodal agents that combine voice, text, and images and can be integrated into mobile devices. Samsung Galaxy AI and Google Gemini Nano bring similar functions directly to smartphones with on-device models. Devices such as the Humane AI Pin and the Rabbit R1 try to embody the idea of an always-on assistant, while remaining immature in terms of usability and diffusion. Even wearables like the Apple Watch are beginning to incorporate predictive health features and AI-powered contextual assistance.

Time horizon

0–2 years — Multimodal assistants will begin handling simple end-to-end tasks — booking travel, managing calendars, filling out basic forms — and will be integrated into consumer devices like smartphones and smart speakers. Indicators: diffusion of multimodal functions in models such as Gemini Nano and Apple Intelligence, already pre-installed on millions of devices by 2025–26. Limitations: high latency for complex processes, need for continuous connectivity, and limited contextual memory capacity. Risks: resistance related to the privacy of personal data and difficulties in regulating informed consent.

3–5 years — The daily use of multimodal assistants will become mainstream. Integrated into major operating systems and wearable devices, they will orchestrate activities across work, private life, and interactions with public and private services. Indicators: over 40% of voice searches and device-to-service interactions mediated by AI agents (Gartner estimate, 2027 scenario). Limitations: risk of contextual errors (wrong flight choices, unresolved scheduling conflicts), dependence on closed ecosystems (Google, Apple, Microsoft). Risks: possibility that dominant providers will limit interoperability, creating lock-in and holding back truly universal adoption.

5–8 years — The very concept of the “app” could dissolve, replaced by a model centered on intelligent agents that know us, track us, and act for us. Users will interact with services through personal agents that mediate access, creating a new economy based on “competition for the agent” rather than for the app. Indicators: over 60% of personal digital transactions managed by AI assistants (McKinsey, 2030 scenario). Limitations: complexity in ensuring trust and transparency in choice algorithms, risk of personalized biases that reinforce habits that are not advantageous for the user. Risks: EU/US regulations on the opacity of decision-making systems and the concentration of power in the hands of a few providers. Some analysts (e.g., Shoshana Zuboff) warn that the model risks strengthening an even more pervasive surveillance capitalism, making the agent more loyal to the platform than to the user.

4. The dream of the web of data comes back to life

In 2001, Tim Berners-Lee, inventor of the World Wide Web, proposed the idea of the semantic web: a web whose content machines could interpret unambiguously, not just display. The goal was to transform the web into a vast “web of data,” where products, services, and knowledge could be described in a structured form, aggregated, and combined to generate new insights. The project never fully took off: financial incentives were lacking, and without a critical mass of adoption, the idea remained unfinished.

But today, AI agents can finally make that dream come true. Intelligent agents can extract structured data from text content and transform it into queryable information. This means that different sources can be collected and cross-referenced in real time: from booking a food and wine tour, combining winery, museum, and hotel opening hours, to more complex systems that integrate heterogeneous sources with no dedicated APIs.
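The step from free text to queryable data can be shown concretely. The snippets, business names, and the regex below are invented for illustration; a real agent would use a language model rather than a single pattern, but the output shape — structured records you can query across sources — is the same.

```python
import re

# Unstructured snippets, as an agent might scrape them from different sites.
pages = [
    "Cantina Rossi welcomes visitors from 10:00 to 18:00.",
    "The museum is open from 09:00 to 17:00 daily.",
    "Hotel Bella: reception open from 07:00 to 23:00.",
]

def extract_hours(text):
    """Turn a free-text sentence into a structured, queryable record."""
    m = re.search(
        r"^(.*?)(?: welcomes| is open|: reception open).*?(\d{2}:\d{2}) to (\d{2}:\d{2})",
        text)
    if not m:
        return None
    name, opens, closes = m.groups()
    return {"name": name.strip(), "opens": opens, "closes": closes}

records = [r for r in (extract_hours(p) for p in pages) if r]
# Cross-source query: which places are still open at 17:30?
# (HH:MM strings compare correctly in lexicographic order.)
open_late = [r["name"] for r in records if r["closes"] > "17:30"]
print(open_late)  # ['Cantina Rossi', 'Hotel Bella']
```

This is the semantic-web outcome without the semantic-web prerequisite: no site had to publish RDF or an API, yet the data ends up structured and combinable.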

Image generated by author

A concrete scenario

Marta is an architect designing a small, zero-impact building. She needs to know which suppliers in her region offer certified materials, what the average delivery times are, and what public incentives are available in her municipality. Today, she would have to spend days combing through ministerial PDFs, regional websites, and local company web pages, each with tables, regulations, and brochures written in different formats. With the new AI agent, however, Marta asks just one question: “What eco-certified materials can I purchase near me, quickly, with available incentives?” The agent visits company websites, reads PDF regulations, interprets ministerial press releases, and even municipal information posts, transforming everything into a single, clear, and verifiable table. No APIs are required, and the data doesn’t need to be published in an open format: the agent extracts, normalizes, and connects it instantly. In just a few minutes, Marta has a purchasing plan that integrates availability, prices, certifications, and tax contributions. A task that previously required weeks of manual research becomes an immediate and personalized process.

Socio-economic consequences

The difference compared to the past lies in the incentives. The use of intelligent agents reduces operating costs, simplifies access to services, and opens new monetization channels for providers. This creates a virtuous circle that could finally lead to the emergence of a true shared data ecosystem. For companies, it will mean greater efficiency and new business opportunities; for citizens, more transparent and immediate access to reliable information. But challenges will also emerge: data quality, preventing biases introduced by agents, and the risk of power being concentrated in the hands of those who control the aggregation tools.

Examples

Some projects already recall the spirit of the semantic web. Google Data Commons integrates large amounts of public data into a single queryable graph; DBpedia extracts structured information from Wikipedia and links it as Linked Data; Semantic MediaWiki allows content to be enriched with metadata; in specific sectors, such as climate and energy, semantic knowledge graphs are being created for research and policy. At the same time, the first experiments with an agentic web are emerging, where AI agents transform unstructured data into coherent, navigable information.

Time horizon

0–2 years — These applications will remain vertical and niche: experimental dashboards, mashups limited to structured datasets, and tools used by SMEs or research centers. The main bottlenecks will be the quality of available data and interoperability: many sources don’t offer common APIs or standards, forcing manual scraping or conversions. Indicators: the number of open-access datasets made available under machine-readable licenses and the diffusion of AI-driven data visualization tools in SMEs.

3–5 years — Intelligent agents will automate data mashup on a larger scale, combining heterogeneous sources (finance, health, environment, consumption). Within this window, the OECD Digital Outlook 2024 expects 30% of European SMEs to adopt AI tools for data analysis and visualization. Risks: lack of shared standards across platforms, risk of bias in incomplete or manipulated datasets, and still high costs for complex queries and real-time updates.

5–10 years — We could witness the birth of real consumer services, capable of generating dynamic and personalized visualizations upon user request. Digital assistants will become capable of connecting public and private sources in real time, building interactive knowledge tailored to individual preferences. Risks: market concentration in the hands of a few global providers, increasing reliance on proprietary datasets, and regulatory challenges to ensure transparency in data sources.

5. Education: from universal tutor to learning companion

Education is one of the fields where AI can have the most transformative impact. Intelligent tutors are already emerging as tools capable of guiding students step by step, generating targeted exercises, explaining the same concept in different styles, and adapting to their individual pace. Unlike traditional e-learning platforms, these new learning agents are interactive, multimodal, and capable of building a truly personalized learning path. They don’t just check whether an answer is right or wrong, but offer targeted feedback, study options, and ongoing assessments that inform customized plans.

This transformation overturns the paradigm of standardized education: schools today must be organized into homogeneous classes, with the same time and content for everyone, but tomorrow, learning could become deeply individualized. An AI tutor could explain algebra with football examples to sports enthusiasts or with musical metaphors to instrument players, ensuring a deeper and more motivating understanding.
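The adaptive core of such a tutor is, at its simplest, a feedback loop: difficulty rises after a streak of correct answers and falls after repeated mistakes. This is a deliberately minimal sketch, with an invented `next_difficulty` helper; real systems use far richer learner models.

```python
def next_difficulty(level, history, window=3):
    """Adjust a 1-10 difficulty level from the student's recent answers.

    history: list of booleans for recent answers (True = correct).
    """
    recent = history[-window:]
    if len(recent) == window and all(recent):
        return min(level + 1, 10)   # streak of successes: make it harder
    if len(recent) == window and not any(recent):
        return max(level - 1, 1)    # streak of mistakes: step back
    return level                    # mixed results: stay at this level

print(next_difficulty(4, [True, True, True]))   # prints 5
print(next_difficulty(4, [False, False, False]))  # prints 3
```

Everything else the article describes — football-themed problems, alternative explanations, weekly reports — layers content generation on top of a loop like this one.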

Image generated by author

A concrete scenario

Luca, 14, struggles with math but loves soccer. When he returns home from school, he opens his AI tutor. Instead of a standard assignment, the agent presents him with an algebra problem modeled after calculating the scores of a Champions League match. When Luca gets a step wrong, the tutor doesn’t just point out the error: it guides him step by step, offering alternative explanations, interactive graphics, and even short simulations. His sister Giulia, meanwhile, is learning English. The AI tutor presents her with an interactive dialogue set in a London restaurant, adapting the level of the sentences to her pronunciation and the speed with which she responds. When she encounters a difficulty, the system provides suggestions in real time, just like a private tutor would. For parents, the tutor generates a weekly report: it shows progress, gaps, and targeted advice, helping families understand how to support their children. For schools, the same data becomes a support for teachers, who can personalize classroom lessons rather than following a rigid, uniform curriculum.

Socio-economic consequences

The widespread adoption of AI tutors brings enormous opportunities but also significant risks. On the positive side, it could democratize access to quality education: students in countries or communities with few qualified teachers could receive ongoing, personalized support. It could also reduce the gap between those with access to private tutoring and those without. For teachers, AI will not replace their role, but will relieve them of some of the repetitive burden (testing, marking), allowing them to focus on empathy, motivation, and the development of critical thinking.

Risks include reliance on private platforms, with privacy concerns regarding student data, bias in generated content, and the possibility of systems becoming tools for standardization rather than personalization. Economically, the growth of educational AI will fuel a new global edtech market, based on subscriptions, institutional licensing, and integration into school systems.

Examples

Concrete signs are already visible. Khan Academy launched Khanmigo, an AI tutor that guides the student with Socratic questions and personalizes learning. Duolingo Max integrates GPT for dynamic language exercises and tailored explanations. Socratic, a Google app, explains school problems step by step. In China, Squirrel AI is pioneering the use of adaptive AI to build truly personalized educational plans.

Time horizon

0–2 years — We will see the first serious pilot projects, especially in extracurricular settings (online tutoring, language courses, professional training). AI tutors will be used as complementary support, never as a replacement, with strong human supervision. Indicators: the share of schools launching official trials (<10% in the EU according to OECD Education 2024) and the number of platforms obtaining educational certifications. Limitations: inconsistent accuracy, risk of hallucinations, and the need for moderation by human teachers and tutors.

3–5 years — The use of large-scale AI platforms will become a reality, especially in global extracurricular ecosystems (MOOCs, edtech, corporate courses). Within this window, HolonIQ (2024) estimates that over 40% of worldwide students will regularly use an AI tutor. Risks: trade union and cultural resistance, inequalities in access (gap between rich and poor schools), and infrastructure costs in emerging countries.

5–10 years — Full integration into official school systems could transform teaching: from rigid and standardized programs to dynamic and personalized paths for each student. AI assistants will become an integral part of the ministerial platforms, ensuring continuous monitoring and adaptive curricula. Risks: excessive dependence on a few global platform providers, loss of pedagogical autonomy for teachers, and the need for strong regulations on privacy and sensitive data of minors.

6. When software will write itself

Today, agents already exist that can build simple, quick-to-implement, and surprisingly effective web or mobile applications. But this is just the beginning: the level of complexity these systems will be able to handle is destined to grow rapidly. Eventually, we will be able to entrust the entire lifecycle of a software project to AI: from requirements gathering to design, from front-end and back-end development to testing and production deployment.

Image generated by author

A concrete scenario

Agnese works for a small cooperative that specializes in local tourism. Until now, whenever a client requested a personalized booking platform, they had to turn to an external agency, which was prohibitively expensive. One day, she decides to try an AI-powered development tool: she describes in words what she wants — “a portal where tourists can view itineraries, book guided tours, and pay online” — and within hours she has a working prototype. The tool generates the code, the interface, and even the payment processing. Agnese doesn’t have to write a single line of code: her job is to supervise, ensuring the itineraries are clear and the prices accurate. Within a week, the site is online, and the cooperative can finally offer a digital service without spending huge sums. The experience also changes her role: from “a client commissioning software” to a “product owner” capable of shaping and directing projects. Her task is no longer technical but strategic: defining what is needed, who needs it, and why. AI does the rest.

Socio-economic consequences

The human role will not disappear, but it will change profoundly. The developer will change from a “code worker” to a strategic product owner, focused on goals, priorities, and the overall vision. People will be freed from repetitive tasks to focus on innovation and product value. Certain technical skills will lose their centrality in the labor market, while demand will grow for professionals capable of managing complex projects, translating business needs into clear requirements, and supervising the work of AI agents. Overall, software professions will be less hands-on and more focused on strategy, communication, and the ability to drive intelligent systems.

Examples

Some tools already show the potential of this evolution. Replit, Lovable, and Bolt allow users to describe an app in natural language and create a working prototype in just a few minutes, complete with front-end, back-end, testing, and deployment. These solutions are still limited to simple projects, but they demonstrate how automatic software generation is moving from theory to practice.

Time horizon

0–2 years — Automatic software generation tools will continue to improve in code quality (bug reduction, assisted refactoring) and usability. Already today, more than 30% of professional developers use GitHub Copilot or an equivalent (Stack Overflow Survey 2024). Immediate risks: inference costs remain high for large projects, fragmentation among non-interoperable tools, and licensing/copyright concerns about generated code. Indicators: share of companies integrating AI coding assistants into official development processes, growth of the “AI in Software Development” market estimated at about $20 billion by 2026 (Markets&Markets).

3–5 years — The platforms will be able to manage more complex full-stack projects, including mobile apps and systems integrated with APIs and databases. During this period, up to 40–50% of code in new company projects could be produced by AI (McKinsey, 2024). The human role will shift towards supervision, architectural design, and strategy, with the main risks related to safety, quality of training datasets, and the difficulty of auditing the generated code. Indicators: diffusion in SMBs and enterprise IT departments, increasing number of “AI-first” platforms used for MVPs and prototypes.

5–8 years — Automatic software generation will become common practice, especially for startups and SMEs that will be able to launch complete products with small teams. Almost entirely AI-driven pipelines will manage development, testing, and deployment, leaving humans with a key role as supervisor, strategist, and validator. Opportunity: drastic cost reduction and accelerated innovation cycles. Risks: market concentration in a few global providers, inconsistent security and compliance standards, and the potential loss of technical know-how among new generations of developers.

7. Cybersecurity and the invisible war between AIs

Artificial intelligence isn’t just a driver of positive innovation: it can also become a weapon in the hands of attackers. Already today, we can glimpse agents capable of analyzing complex systems and quickly identifying vulnerabilities to exploit, and in the future, these attacks will become increasingly sophisticated, to the point of rendering traditional defenses insufficient. The only effective response will be to rely on AI-based defenses, agents capable of constantly monitoring systems, detecting anomalies in real time, and responding with immediate corrections.

Image generated by author

A concrete scenario

Enrico is the IT manager of a medium-sized manufacturing company. One morning, he receives an alert from the security system: a suspicious access to the company servers from a foreign IP address. Before his team can even open the logs, the AI defense agent has already identified the anomalous behavior, isolated the compromised machine, and redirected traffic to a secure environment. Meanwhile, a concise report appears on Enrico’s screen: “Intrusion attempt using stolen credentials. Mitigation complete. No data exfiltrated.” The attack is never noticed by his employees, who continue working uninterrupted. What Enrico doesn’t see is the “invisible” battle taking place behind the scenes: another agent, this time hostile, was testing vulnerabilities in the system. The defensive AI reacted faster than a human ever could, updating its protection algorithms in real time. Enrico, instead of spending hours putting out fires, can focus on strengthening company procedures and training staff. His role shifts from reacting to attacks to strategic prevention.
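The "faster than a human" detection in this scenario rests on the same basic idea as statistical anomaly detection. As a toy illustration only (not Darktrace's or any vendor's actual method), the sketch below flags behavior that deviates sharply from a learned baseline using a simple z-score; all data and thresholds here are invented:

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` as anomalous if it lies more than `threshold`
    standard deviations from the mean of the observed baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Baseline: login attempts per hour for one account (illustrative).
baseline = [2, 3, 2, 4, 3, 2, 3, 3]
print(is_anomalous(baseline, 3))   # typical activity → False
print(is_anomalous(baseline, 40))  # burst consistent with stolen credentials → True
```

Real systems combine hundreds of such signals with learned models and automated containment, but the principle — baseline, deviation, immediate response — is the same.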

Socio-economic consequences

The transformation of cybersecurity into an invisible war between AI has implications that go beyond technology. At the geopolitical level, the spread of offensive agents developed by states, criminal groups, or terrorists could trigger a true digital arms race, with escalation risks that are difficult to control. It is therefore urgent for governments and institutions to define international rules and agreements that balance innovation and security. Businesses and citizens will also be affected: hyper-realistic phishing attacks, customized malware, and deepfakes will become accessible to everyone, requiring a combination of technological defenses and cultural training. On the positive side, AI can make systems more resilient, reduce response times from days to seconds, and automate patch releases, but the question of trust remains: to what extent can we delegate digital defense to autonomous entities?

Examples

The market is already showing the first concrete signs. Darktrace, with its Antigena platform, uses AI to detect and neutralize threats in real time; Microsoft Security Copilot integrates language models to translate complex logs into defensive actions. Startups like Reco monitor the anomalous use of SaaS applications, while newer players such as Vastav.AI are developing countermeasures against deepfakes. Academia is also contributing: projects like CYGENT and HuntGPT are experimenting with models capable of transforming huge volumes of logs into clear, prioritized alerts, reducing the burden on human operators.

Time horizon

0–2 years — AI-driven cyber defense solutions will become more widespread and refined, especially for automatic triage (filtering real events from false positives) and immediate response to known threats. Already today, over 35% of global companies use AI systems for cybersecurity (Capgemini, 2023). Current risks: high number of false alerts, integration costs, and a lack of qualified personnel to supervise agents. Indicators: share of IT budget allocated to AI-driven solutions (today approximately 15–20%, Gartner 2024), number of attacks detected and neutralized without human intervention.

3–5 years — AI agents will be permanently integrated into corporate and government infrastructures, capable of defending themselves even from never-before-seen attacks through few-shot and continual learning techniques. Within this window, it is expected that over 60% of large enterprises will adopt native AI in cybersecurity (BCG, 2024). Opportunity: almost zero reaction times, predictive ability on attack patterns. Risks: possible vulnerabilities of the AI models themselves (data poisoning, adversarial attacks), dependence on centralized cloud providers, and difficulty in auditing and explaining decisions.

5–8 years — The systems will reach a level of proactive autonomy, capable of recognizing unprecedented patterns and adapting defense strategies in real time. This could lead to “autonomous digital warfare” scenarios, with agents directly engaging each other in cyberspace without human intervention. The challenge will not only be technical, but also ethical and political: who is responsible for an automated counteroffensive? How can we govern a digital conflict waged by increasingly autonomous machines? Indicators: first international regulatory policies for the use of autonomous AI in cyberwarfare, percentage of incidents mitigated without direct supervision, and documented cases of escalations avoided or exacerbated by AI.

8. No more one-size-fits-all interfaces, but tailor-made experiences

For years, the web was a “uniform” environment: sites had a fixed layout, standard graphics, and a precise way of representing data. Whether it was statistics on the cost of living, sports scores, or company balance sheets, users were forced to adapt to the rendering chosen by the content producer. With the arrival of AI agents, this paradigm is fading. Agents can extract raw data from any content and regenerate it into dynamic, personalized representations. The same set of information can take on different forms depending on the context, preferences, and even the cognitive abilities of the individual user.

Image generated by author

A concrete scenario

A small consulting firm manages its performance through an AI system that collects raw data on clients, revenue, and project schedules. The same numbers are rendered differently depending on who consults them, because the AI builds each view tailored to each user’s preferences and habits. The owner, Giorgio, opens the dashboard and finds a panel modeled after his decision-making style: cash flow projections, margins on individual projects, insolvency risks, and industry benchmarks. This information is selected and presented in his preferred format — predictive graphs and comparative tables — to guide his strategic decisions. Marta, the project manager, accesses a visualization built around her criteria: an operational dashboard with progress bars, hours allocated to teams, and visual alerts on delays. The view reflects her need for immediacy and practical control, without getting lost in financial details. The same system thus becomes two different tools: strategic for Giorgio, operational for Marta. The AI doesn’t change the data, but interprets and reorganizes it based on the viewer, transforming the same numerical reality into personalized and relevant experiences.
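The decoupling at work here — one dataset, many renderings — can be made concrete with a minimal sketch. The profiles, fields, and layouts below are purely illustrative assumptions, not any real product's API:

```python
# One dataset, two user-specific renderings: the underlying data never
# changes, only the view built on top of it does. All names are invented.
projects = [
    {"name": "Alpha", "revenue": 120_000, "cost": 90_000, "progress": 0.8},
    {"name": "Beta",  "revenue": 80_000,  "cost": 85_000, "progress": 0.4},
]

def render(data, profile):
    if profile == "strategic":    # margins, for the owner's view
        return [(p["name"], p["revenue"] - p["cost"]) for p in data]
    if profile == "operational":  # progress bars, for the project manager
        return [(p["name"], "#" * int(p["progress"] * 10)) for p in data]
    raise ValueError(f"unknown profile: {profile}")

print(render(projects, "strategic"))    # [('Alpha', 30000), ('Beta', -5000)]
print(render(projects, "operational"))  # [('Alpha', '########'), ('Beta', '####')]
```

A generative interface would synthesize the `render` function itself per user, but the separation of raw data from presentation is the core of the idea.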

Socio-economic consequences

The ability to completely decouple data from its visual representation opens up a new ecosystem of services. Data will increasingly become an “API” service, distributed in raw form and made accessible through user-tailored visualizations. This brings enormous advantages in terms of accessibility, transparency, and inclusiveness: anyone will be able to read data in the format best suited to their needs. But new challenges also emerge: if every user sees a different representation, who guarantees that the substance has not been altered or distorted? For companies, the impact is significant: competition will no longer be just over data ownership, but over the ability to reliably interpret it. On the social side, the information experience risks fragmentation: “shared truth” could give way to subjective and potentially manipulable representations.

Examples

Some signs of this transition are already visible. Fitness apps and educational platforms are adjusting their content and layout based on individual behavior. Tools like Google Stitch generate interfaces from text prompts or images, while solutions like Polymer AI Dashboard Generator create custom data visualizations. On the academic front, prototypes such as Drillboards and SituationAdapt showcase adaptive dashboards and mixed reality interfaces that can transform based on the user’s context and skills.

Time horizon

0–2 years — The solutions will remain limited to adaptive layout and simple basic preferences (e.g., light/dark mode, text resizing, dashboard customization). Indicators: share of consumer apps with adaptive UIs over 30% (data already observed in mobile banking and e-learning, Statista 2024). Risks: lack of standards, with fragmented experiences across platforms; perception of “cosmetic novelty” that limits actual adoption.

3–5 years — The first solutions will emerge as dynamic generative interfaces, capable of changing in real time based on the user’s actions. Indicators: penetration of the concept of “adaptive dashboards” in B2B SaaS (over 25% of enterprise tools by 2028, Gartner); first ISO guidelines for generative design. Risks: latency and computational costs in live adaptation; fears of loss of control by users (“the interface decides for me”).

5–8 years — UIs will become truly proactive, learning from environmental context and habits until they anticipate needs without explicit intervention. Indicators: diffusion of devices with adaptive UIs (projected to exceed 100M global users by 2030, McKinsey); increased spending on generative UX. Risks: privacy violations if personal context is tracked opaquely; dependence on a few global providers that control generative UI libraries.

8–10 years — Interfaces will become elastic and pervasive, transforming not only based on personal profile, but also on emotional factors (mood, stress, fatigue) or environmental factors (light, noise, social context). Indicators: First clinical studies on the use of emotional UIs in healthcare and edtech; penetration into government and institutional systems. Risks: risk of “over-adaptation,” which reduces pluralism and comparison (everyone sees only their own version of digital reality).

9. The video game that never repeats itself

Artificial intelligence is revolutionizing gaming, introducing engines and agents capable of dynamically generating scenarios, characters, and missions. The underlying idea is powerful: no two games will ever be the same. Real-time environments, natural-interaction NPCs, and missions that adapt to the player’s profile mark the transition from a scripted and deterministic model to self-generated and unique experiences.

Image generated by author

A concrete scenario

There are two players, Aria and Zeno. They begin the same generative game on an autumn day. They both start in a mysterious village on the edge of a forest, but thanks to the AI agent, their experiences diverge completely. Aria, a lover of storytelling, finds a path filled with poetic dialogue, characters with moral depth, and quests that push her to explore the human side of the forest: among sages who speak in riddles and tree spirits who share ancient stories. Zeno, on the other hand, prefers strategic action: his itinerary is dominated by warlike trials, dungeons with aggressive creatures, and combat that requires timing and tactical decisions. The AI shapes the game world not with fixed scenarios, but by adapting settings, tone, and conflicts to each player’s style. At the end of the session, Aria and Zeno compare notes: they have played games with the same title, but Aria has discovered legends and secrets, while Zeno has overcome challenges and clashes. Both have the same starting point and shared data, but views and paths tailored specifically for them.
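A crude way to picture how an engine might steer Aria and Zeno down different paths is preference-weighted content selection. The quests, tags, and profiles below are invented for illustration; a production system would use learned player models rather than hand-set weights:

```python
# Toy sketch of profile-driven content selection: rank candidate quests
# by how well their tags match the player's observed preferences.
quests = [
    {"title": "The Whispering Grove",       "tags": {"story": 0.9, "combat": 0.1}},
    {"title": "Siege of the Deep Dungeon",  "tags": {"story": 0.2, "combat": 0.9}},
    {"title": "The Riddle of the Elder Oak","tags": {"story": 0.8, "combat": 0.3}},
]

def pick_quest(player_profile, candidates):
    """Score each quest as a dot product between its tags and the
    player's preference weights; return the best match."""
    def score(q):
        return sum(q["tags"].get(tag, 0.0) * w for tag, w in player_profile.items())
    return max(candidates, key=score)

aria = {"story": 1.0, "combat": 0.2}   # narrative-driven player
zeno = {"story": 0.1, "combat": 1.0}   # action-driven player

print(pick_quest(aria, quests)["title"])  # → The Whispering Grove
print(pick_quest(zeno, quests)["title"])  # → Siege of the Deep Dungeon
```

In a generative engine the candidates themselves would also be produced on the fly, but the matching of content to an inferred player profile works on this principle.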

Socio-economic consequences

For the video game industry, the “no two games are the same” paradigm presents both opportunities and challenges. On the one hand, the longevity of titles could increase dramatically, with games capable of entertaining for years without becoming repetitive, enabling new business models based on subscriptions, personalization, and tailored experiences. On the other hand, the risk of losing creative control emerges: a content generation that is too autonomous could sacrifice narrative coherence, balance, and overall quality. Studios will have to reinvent their pipelines and roles: less manual work on assets, more strategic oversight and direction. For players, this means more engaging and personalized experiences, but also the risk of cultural fragmentation: the “shared game” that builds community could give way to individual and unique experiences.

Examples

The sector is already an active laboratory. Inworld AI offers NPCs with memory and objectives that communicate naturally (see the Inworld Origins demo). Artificial Agency works on goal-driven behavioral engines for non-scripted characters; modl.ai develops agents for QA and level balancing, simulating “virtual players.” Scenario and Kaedim accelerate the creation of 2D/3D assets, while Latent Technology generates reactive animations. On the narrative front, Charisma.ai and Hidden Door experiment with multi-branch storytelling, while UGC worlds like DreamWorld integrate generative construction. Among the playable products, AI Dungeon was the first example of real-time generative storytelling; even mainstream titles like Candy Crush use AI to adapt, albeit within limits, to level generation.

Time horizon

0–2 years — Early prototypes and indie titles with semi-generative narrative and environments. The games offer alternative missions and more fluid dialogue thanks to LLMs, but narrative coherence remains fragile. Indicators: number of indie games using LLM-powered conversational NPCs; beta-testing adoption on platforms like Steam Early Access. Limits: weak narrative coherence, high inference costs, and a lack of integrated authoring tools for developers. Risks: inflated expectations compared to actual capabilities; risk of inconsistent or inappropriate content.

3–5 years — Birth of real hybrid game engines, where level design, missions, and dialogue are dynamically generated based on the player’s profile. Open-world environments that adapt to preferences (exploration vs. combat). Indicators: adoption in mid-tier/AAA titles; integration of generative plugins into major engines (Unity, Unreal); growth of dynamically generated assets (Scenario, Kaedim, Inworld). Limits: latency in real-time content generation; lack of standards for testing and balancing generative gameplay; difficulty ensuring balanced experiences. Risks: excessive variability that undermines online competitiveness; unbalanced generated content (missions that are too easy or too difficult).

5–8 years — Diffusion of proactive engines: the game not only responds to actions but anticipates the player’s style, offering tailored narratives. Stories are no longer branched but truly open, constructed in real time. Indicators: over 30% of AAA games integrate generative systems for questing, storytelling, and world-building; growing active communities developing mods based on AI-driven engines. Limits: training sets still expensive; difficulty maintaining cross-session consistency (remembering 100+ hours of play). Risks: loss of creative control by developers; risk of increased addiction (experiences that are too “tailor-made”).

8–10 years — Arrival of fully generative game engines: universes built from scratch each session, with narrative coherence, adaptive rules, and persistent worlds that evolve alongside the players. Every game is unique. Indicators: generative engines as standard in AAA titles; first fully generative UGC (user-generated content) platforms powered by AI; widespread use in VR/AR. Limits: massive cloud infrastructure required; high energy costs; need for new metrics to balance generative gameplay. Risks: concentration in a few global providers (generative engine monopolies); ethical risks related to uncontrollable narratives (bias, toxic or manipulative content).

10. Machines that design machines

For centuries, design has been the exclusive domain of human ingenuity. Engineers, architects, and designers have always been tasked with analyzing constraints, developing solutions, and designing complex systems. But what will happen when this ability passes — at least in part — to machines?

AI will not be limited to writing software or generating personalized experiences. In the near future, it will be able to conceive complete engineering systems: infrastructures, systems, electronic circuits, even mechanical components, and urban architecture. We’re not talking about simple CAD-assisted designs, but real digital co-designers, capable of evaluating scenarios, simulating performance, optimizing materials, and suggesting innovative solutions that a single human team would find difficult to explore.

Image generated by author

A concrete scenario

In a small mechanical design company, Livia, the chief engineer, and Carlo, a young project manager, must develop a new cooling system for a line of industrial drones. Traditionally, this would have required weeks of analysis, CAD drawings, and iterative simulations. This time, however, they activate a specialized AI agent: Livia enters only the functional requirements (“cooling up to 40°C in high-dust environments, maximum weight 200 grams”), while Carlo specifies the financial and supply constraints. In a few hours, the AI generates dozens of design variants complete with technical diagrams, fluid dynamic simulations, and cost estimates. The system doesn’t work blindly: it shows Livia the engineering results with graphs and stress tests, while Carlo receives financial dashboards, material comparisons, and production time estimates. Everyone receives a personalized view, built on their priorities and skills. Ultimately, the team no longer discusses preliminary designs to be refined, but chooses from already simulated and validated solutions. In practice, design becomes a strategic selection process, with AI acting as an invisible driver of innovation.
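At its core, the workflow Livia and Carlo rely on is a generate-and-filter loop: enumerate candidate designs, discard those that violate hard constraints, and rank the survivors. The sketch below illustrates that pattern with invented parameters and a deliberately naive "performance model" — it is not real thermal engineering:

```python
from itertools import product

# Toy generate-and-filter loop for the drone-cooling example. The
# per-fin weight/cooling/cost figures are invented for illustration.
fin_counts = [10, 20, 30]
materials = {
    "aluminum": {"g_per_fin": 6, "cooling_per_fin": 2.2, "cost_per_fin": 0.5},
    "copper":   {"g_per_fin": 9, "cooling_per_fin": 3.0, "cost_per_fin": 1.2},
}

def enumerate_designs(max_weight_g=200, min_cooling_c=40):
    """Enumerate variants, keep those meeting the hard constraints
    (weight cap, minimum cooling), and sort the rest by cost."""
    viable = []
    for fins, (mat, props) in product(fin_counts, materials.items()):
        weight = fins * props["g_per_fin"]
        cooling = fins * props["cooling_per_fin"]
        cost = fins * props["cost_per_fin"]
        if weight <= max_weight_g and cooling >= min_cooling_c:
            viable.append({"fins": fins, "material": mat, "weight_g": weight,
                           "cooling_c": cooling, "cost": cost})
    return sorted(viable, key=lambda d: d["cost"])  # cheapest first

for design in enumerate_designs():
    print(design)
```

Real generative design replaces the brute-force enumeration with optimization and physics simulation, but the shape of the process — constraints in, ranked validated variants out — matches the scenario above.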

Socio-economic consequences

This development could dramatically reduce design times and costs, lowering barriers to entry in capital-intensive sectors such as automotive, construction, or advanced manufacturing. Small companies and startups could access design capabilities previously reserved for large industrial groups. On the other hand, concentration risks will emerge: whoever controls the most powerful design models and datasets will have a huge, potentially unbridgeable competitive advantage. Furthermore, the role of engineers will change: fewer designers and more supervisors, called upon to ensure safety, ethics, and regulatory compliance.

Examples

We are already seeing the first signs today. Systems like Autodesk Generative Design or Siemens NX with integrated AI allow users to explore thousands of design variants optimized for weight, strength, or cost. In the semiconductor industry, tools such as Synopsys DSO.ai design chips with reduced power consumption and improved performance. In construction, experiments with generative urban design create entire virtual neighborhoods by evaluating traffic, energy consumption, and environmental impact. These are still support tools, but they represent a preview of what could become a nearly autonomous design cycle.

Time horizon

0–3 years — Diffusion of vertical generative design tools, already growing today: AI software for chip design (e.g., Synopsys DSO.ai), modeling of mechanical components (AI-driven generative design), and parametric architecture (Autodesk Forma). Indicators: share of manufacturing companies that adopt generative design tools (currently estimated at 18% globally, McKinsey 2024), number of AI-assisted patents in the engineering sector. Current limitations: limited capacity in narrow domains, high reliance on proprietary training datasets.

3–5 years — Appearance of systems capable of orchestrating the entire design cycle in specific sectors: not only designing concepts, but also integrating physics simulations, choice of materials, and cost analysis. Indicators: first implementations in automotive and modular construction, estimated reductions in project timelines of up to 30–40% (BCG, 2025). Risks: difficulty in validating structural safety, computational costs of multi-variable simulations, regulatory resistance for critical applications (infrastructure, aerospace).

5–10 years — Birth of real “independent design laboratories”: AI ecosystems in which models generate concepts, validate them through complex simulations, and propose solutions ready for production. Humans remain supervisors and decision makers, but much of the pipeline — from engineering creativity to verification — is automated. Indicators: share of complex projects (bridges, chips, turbines) largely generated by AI, reduction of industrial R&D costs estimated at up to 50% (OECD, 2026). Risks: concentration of power in the companies that control the design platforms, lack of transparency in algorithmic decisions, and possible biases embedded in simulation models.

11. A doctor in every pocket

Anyone who has tried ChatGPT or similar systems has found themselves, at least once, asking for advice on symptoms or treatments. Today, these responses should be treated with caution, but in the future, AI could become institutionalized interlocutors within healthcare systems. Medicine already uses machine learning models to analyze X-rays, MRIs, or blood tests, identifying anomalies that the human eye might miss. Increasingly advanced generative models add the ability to communicate with patients, gather information, cross-reference it with large knowledge bases, and propose diagnostic or therapeutic hypotheses.

Image generated by author

A concrete scenario

Sofia lives in a small mountain village where her GP is only available twice a week. One evening, she feels persistent chest pain and, unsure whether to go to the emergency room immediately, opens her healthcare app, connected to the biometric bracelet she’s been wearing for months. The AI agent collects her vital data in real time (heart rate, oxygen saturation, blood pressure), comparing them with her medical history and millions of similar cases in a certified database. After a few seconds, the platform provides her with a risk assessment: not just a generic warning, but a detailed triage with percentages, possible causes, and a clear recommendation: “Go to the nearest emergency room immediately. We’ve notified the on-call doctor, who will receive your updated data in real time.” When Sofia arrives at the hospital, the cardiologist doesn’t have to start from scratch: on his tablet, he finds an AI-generated report already available, with graphs of parameter trends, a summary of her medical history, and possible diagnoses ranked by probability. This allows him to intervene immediately, saving precious time. For Sofia, AI was a vital filter and mediator between her and the healthcare system. It didn’t replace a doctor, but it acted as a bridge between her symptoms and specialized care, transforming a concern into a lifesaving action.
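The triage step in Sofia's story can be caricatured as a rule-based risk score over vital signs. The thresholds, weights, and tiers below are invented for illustration and are not clinical guidance; real systems combine far richer models with a patient's history:

```python
# Minimal rule-based triage sketch: combine vital signs into a risk
# score and map it to a recommendation tier. Illustrative values only.
def triage_score(heart_rate, spo2, systolic_bp):
    score = 0
    if heart_rate > 120 or heart_rate < 45:  # tachycardia/bradycardia
        score += 2
    if spo2 < 92:                            # low oxygen saturation
        score += 3
    if systolic_bp > 180 or systolic_bp < 90:  # hypertensive crisis / hypotension
        score += 2
    return score

def recommendation(score):
    if score >= 4:
        return "emergency"  # go to the ER now; notify the on-call doctor
    if score >= 2:
        return "urgent"     # same-day medical contact advised
    return "monitor"        # keep watching, no immediate action

vitals = {"heart_rate": 128, "spo2": 90, "systolic_bp": 135}
s = triage_score(**vitals)
print(s, recommendation(s))  # → 5 emergency
```

The value of the AI layer in the scenario is not the scoring itself but the continuous monitoring and the handoff of structured context to the clinician.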

Socio-economic consequences

The impact of this transformation would be enormous. AI can reduce healthcare costs through faster diagnoses, optimize resource use, expand access to care in countries with a shortage of doctors, and enable mass preventive medicine based on continuous monitoring. At the same time, significant risks emerge: healthcare data management requires high standards of privacy, and legal liability for diagnostic errors remains an unresolved issue. The human role does not disappear: empathy, clinical judgment, and responsibility remain essential, but the doctor of the future will work side by side with AI, in a more efficient and inclusive system.

Examples

Concrete applications already exist today. Symptoma, Babylon Health, and Ada guide users through initial triage; Aidoc, PathAI, and Zebra Medical Vision apply AI to medical image analysis, identifying anomalies invisible to the human eye. Microsoft Dragon Copilot helps doctors transcribe visits and summarize clinical data, reducing bureaucratic burden. Companies like DxGPT and Cera demonstrate how AI can support GPT-based diagnoses or predict risks for elderly patients.

Time horizon

0–3 years — Widespread diffusion of virtual assistants for initial triage, booking management, and patient-hospital interactions. At the same time, the growing use of AI in medical image analysis (X-rays, MRIs, CT scans) continues under constant supervision by doctors. Indicators: over 30% of hospitals in Europe already report using AI for diagnostic support (OECD Health Data, 2024); the global AI market in healthcare is estimated at $28 billion in 2025 (Statesman). Limits: risk of false positives/negatives, lack of diversified datasets, and high costs of integration into hospital systems.

3–5 years — AI integrates deeper into clinical processes: dissemination of predictive systems for chronic diseases (diabetes, heart failure), continuous monitoring through intelligent wearables, and consolidation of institutionalized digital assistants to support doctors. Indicators: growth of the market for AI-enabled medical wearables, expected to surpass $60 billion by 2027 (Markets&Markets); first clinical guidelines on AI adopted by regulatory bodies (e.g., FDA, EMA). Risks: regulatory resistance, fears about health data privacy, and poor interoperability between hospital systems.

5–8 years — AI could take on a recognized institutional role, with certified agents that contribute directly to the diagnostic and therapeutic process. Advanced telemedicine and fully personalized care will be governed by clear regulatory frameworks on liability and privacy protection. Indicators: percentage of diagnoses co-signed by AI in national health systems; share of digital health records managed with AI predictive modules (expected to exceed 50% by 2030, McKinsey Health Report). Risks: inequalities of access (gap between high- and low-income countries), risk of blindly relying on systems that are not always transparent, and potential resistance from professional categories.

12. Stories that are written as you read them

Reading has always been a linear experience: an author writes, a reader reads. With the arrival of AI, this paradigm could radically change. No longer will texts be the same for everyone, but dynamic, personalized books that no one else will ever read the same way. This idea has its roots in the gamebooks of the 1980s and 1990s, in which readers could choose different narrative paths: innovative but limited experiences, because the twists and turns and endings were always predetermined by the author. With AI, however, the possibilities become virtually infinite: stories that write themselves as they are read, shaped by each reader’s choices, preferences, and even reading style.

Image generated by author

A concrete scenario

Imagine Giulia and Lorenzo downloading the same AI book from the platform. The opening lines are identical: a young journalist moves to a new city, where a sudden blackout throws everything into chaos. For Giulia, a fan of mystery and action, the story immediately takes on the pace of a thriller: clandestine investigations, hackers at work, and a criminal network exploiting the blackouts to cover up illegal trafficking. The journalist becomes the protagonist of a race against time, filled with chases, suspicions, and political conspiracies. For Lorenzo, however, the plot transforms into a romantic drama: the blackout becomes the backdrop for an encounter with an unknown neighbor. The tension of the locked-down city brings out emotions, intimacy, and unexpected connections, leading the protagonist to experience a love entanglement she never imagined. In the end, Giulia and Lorenzo discover they have read two radically different stories, yet both coherent and engaging: the author had established the characters and context, while the AI had modulated the narrative genre and development according to each reader’s preferences.

Socio-economic consequences

Such a leap would profoundly transform the publishing market and the cultural experience. On the one hand, reading would become more engaging and accessible, especially for new generations accustomed to the interactivity of video games. On the other hand, the collective dimension of the book as a shared experience risks dissolving: we will no longer read the same novel, but unique and unrepeatable versions. For publishing, it will mean redefining business models: from the sale of identical copies to the “pay-per-experience” model, personalized digital libraries, and interactive subscriptions. The role of the writer, however, remains central: no longer the author of every detail, but a narrative architect who establishes the universe, tone, and coherence, leaving the dynamic execution to AI.

Examples

Ongoing experiments show that publishing is moving in this direction, even if we’re still far from “infinite” storytelling. Startups like Inkitt and StoryFit use AI to predict book success, generate voices for audiobooks, or suggest personalized readings. Tools like AI Interactive Books enrich the texts with multimedia elements or quizzes, while experiences such as Inanimate Alice offer hybrid narratives with interactive minigames. These are interesting experiences, but they remain tied to limited and predetermined paths: true dynamic narrative generation is yet to come.

Time horizon

0–3 years — Growth in the diffusion of books enriched with light interactivity: multimedia elements (quizzes, dynamic images, audio links) and surface customizations based on reading preferences or style (e.g., a faster or more descriptive pace). However, endings and structure still remain fixed and predetermined. Indicators: share of e-books with advanced interactive functions, today estimated at 5–7% of the global market (PwC, 2024); publishers are increasingly experimenting with “enhanced e-books.” Limits: lack of interoperable standards, risk of fragmented experiences across different platforms.

4–6 years — Truly dynamic narratives, with plots and endings shaped by the reader’s choices and the data collected from their interactions. No more rigid branching paths, but stories generated in real time. Indicators: growth in dedicated AI-driven publishing platforms (currently fewer than 50 startups listed globally); first partnerships between traditional publishers and generative model providers. Risks: limited narrative coherence, licensing costs of the models, and authors’ skepticism toward “creative delegitimization”.

6–8 years — AI will reach a maturity that ensures stylistic and narrative coherence in dynamically generated texts. Publishers will be able to distribute unrepeatable books at scale, books that are never read the same way twice. Indicators: percentage of editorial catalogues with dynamic generation modules (expected to exceed 20% by 2032, Deloitte Media Report); growth in subscriptions to personalized reading platforms. Risks: loss of shared collective experiences (everyone reads different versions), difficulty in validating content (historical plausibility, scientific accuracy), and new ethical questions on the role of the author.

13. Your personal director

If books can become dynamic and personalized, why not films and TV series as well? It’s not far-fetched to imagine a future where every viewer can have their own version of an audiovisual work, tailor-made. AI is already capable of producing credible and coherent video sequences lasting a few dozen seconds; tomorrow, these technologies could extend to entire episodes or full-length feature films. Traditionally, cinema has been a passive medium, but this paradigm could also change: branching movies and series, with scenes, dialogues, and endings that adapt to the viewer’s choices, would transform viewing into an interactive experience.

Image generated by author

A concrete scenario

Marta and Karim access the same AI cinema platform and choose a film with the same premise: a group of strangers are stranded at an airport due to a sudden storm. For Marta, a suspense enthusiast, the film unfolds like a psychological thriller: the passengers suspect someone is manipulating the entire event, the dialogue becomes tense, and the storm seems like the prelude to an orchestrated plot. For Karim, however, the same story transforms into a dystopian sci-fi: the airport becomes a government control hub, the storm is the result of failed climate experiments, and the passengers discover they are part of a large-scale social test. Surveillance and collective rebellion dominate the plot, culminating in a denouement that calls individual freedom into question.

Socio-economic consequences

The impacts would be enormous. For viewers, it would mean constantly changing content, personalized to their tastes and even their decisions. For the industry, it would mean the opportunity to drastically reduce production costs, accelerate creative cycles, and experiment with new business models, from subscriptions to dynamic pay-per-view experiences. From a cultural perspective, however, significant risks emerge: the collective knowledge of cinema, made up of shared works and iconic scenes discussed by all, could give way to individual and unrepeatable narratives, eroding the social function of cinema as a common language.

Examples

Some concrete signs are already visible. Tools like Runway and HeyGen allow the generation of video clips or realistic digital avatars, while Meta, in collaboration with Blumhouse, has presented a model capable of producing video sequences complete with coherent audio. Startups like Odyssey are experimenting with interactive 3D streaming environments, where viewers can move and influence the scene. Projects like Evertrail generate characters, dialogue, and settings in real-time based on audience interactions. Consumer tools like Canva, with its AI video generator, also offer a first glimpse of this revolution.

Time horizon

0–3 years — Generative technologies will find applications in short clips, teasers, advertisements, and all stages of editing and post-production (e.g., scene regeneration, automatic dubbing, and the insertion of digital avatars). Indicators: today, less than 10% of professional video content uses generative AI (PwC Media Outlook, 2024), but the growth trend is estimated to exceed 40% per year. Limits: inconsistent visual quality, difficulty maintaining narrative coherence, and high costs of generating long-form video.

3–5 years — The first examples will appear as interactive short episodes or films, with alternative scenes and endings adapted to the viewer. Streaming platforms could offer personalized narratives in which choices (explicit or implicit, e.g., viewing patterns) influence the development of the story. Indicators: emergence of experimental AI-driven catalogs; first original productions on mainstream platforms. Risks: resistance from the creative industry due to fears of loss of authorial control, regulations on copyright of actors and screenplays.

5–8 years — Audiovisual works on demand will become a fully realized reality, featuring films and series with branching plots, alternative endings, and reactive characters that can evolve in real-time. Cinema will go from a passive medium to an interactive and unique experience, closer to gaming than to traditional television. Indicators: estimated share of interactive content on total new productions (between 15–25% by 2032, McKinsey Media). Risks: loss of the collective dimension of the cinematic experience, concentration of platforms in a few global players, and possible inequalities of access linked to computing infrastructure costs.

14. Blockchain at the service of collective intelligence

Blockchain was created to certify processes, making data immutable, transparent, and verifiable without intermediaries. But its logic remains rigid: smart contracts, although called “intelligent,” only execute simple, deterministic instructions. Artificial intelligence represents the other side of the coin: flexible, predictive, and adaptive, capable of managing complex decisions, but opaque like a black box and vulnerable to bias and manipulation. The integration of the two technologies thus paves the way for a new paradigm: autonomous, trustless, and permissionless systems that are both intelligent and verifiable. In this model, blockchain certifies data and decisions, while AI provides the missing adaptive capability.

Image generated by author

A concrete scenario

Imagine AuroraDAO (a fictional name), a global community that manages renewable energy projects. Its members are spread across the globe: a young engineer in Brazil, an environmental researcher in Kenya, and a small solar company in Germany. They all vote and participate in decisions via the blockchain, which certifies every proposal, vote, and transaction immutably and transparently. At the center, an AI agent acts as the “brain”: it analyzes data on climate, energy demand, and material prices, proposes concrete scenarios (e.g., “building three micro-wind farms in sub-Saharan Africa is now 20% more efficient than a solar farm in India”), and summarizes the pros and cons for the community. When members approve, the smart contract executes automatically: funds are moved, suppliers are selected, and milestones are tracked. The AI continues to oversee progress, adapting plans if unforeseen circumstances arise (a storm, a supply crisis, a regulatory change). In this model, governance is neither fully human nor fully algorithmic: it is an auditable fusion. AI brings the ability to analyze and predict, and blockchain ensures that no one can manipulate the rules or corrupt the processes. The result is a community capable of managing complex projects on a global scale with an efficiency and transparency impossible to achieve with traditional models.
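The propose, vote, and execute loop described above can be reduced to a few lines of code. Here is a minimal Python sketch, assuming a simple quorum-and-majority rule; all names and thresholds are illustrative, not AuroraDAO's actual mechanics, and a real DAO would record every step on-chain via smart contracts:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An AI-drafted proposal put to a community vote (illustrative)."""
    description: str
    votes_for: int = 0
    votes_against: int = 0
    executed: bool = False

class MiniDAO:
    """Toy model of the propose -> vote -> execute loop.
    A real DAO would certify each step on a blockchain."""

    def __init__(self, quorum: int, threshold: float = 0.5):
        self.quorum = quorum        # minimum total votes required
        self.threshold = threshold  # share of 'for' votes needed to pass
        self.proposals: list[Proposal] = []

    def propose(self, description: str) -> Proposal:
        p = Proposal(description)
        self.proposals.append(p)
        return p

    def vote(self, p: Proposal, approve: bool) -> None:
        if approve:
            p.votes_for += 1
        else:
            p.votes_against += 1

    def execute(self, p: Proposal) -> bool:
        total = p.votes_for + p.votes_against
        if total < self.quorum:
            return False            # quorum not met: nothing happens
        passed = p.votes_for / total > self.threshold
        p.executed = passed         # in reality: a smart contract moves funds
        return passed
```

In the scenario above, the AI agent would author the `Proposal` descriptions and monitor execution, while the vote tally and the `execute` step would live in an immutable smart contract rather than in application code.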

Socio-economic consequences

The potential impacts are profound. On the one hand, the combination of AI and blockchain could drastically reduce the need for intermediaries, lower transaction costs, and foster more transparent, resilient, and accessible global markets. On the other hand, new challenges emerge: who will control the AI models powering such powerful systems? How can biases be prevented from becoming structural and immutable once recorded on-chain? And what role will governments and regulatory institutions have if economic or political decisions are made by autonomous and decentralized entities? This opens up fertile, but also risky, ground for redefining governance, trust, and the distribution of power.

Examples

Some prototypes are already emerging. In DeFi, AI systems dynamically adjust interest rates and liquidity while maintaining the transparency of on-chain transactions, as shown by AgileRate and the experiments discussed in Cointelegraph on AI-driven DeFi. Intelligent DAOs are experimenting with decision-making processes in which AI processes complex scenarios, while blockchain guarantees uncorrupted and verifiable executions — examples include the multi-agent approach of ISEK and the intent-based strategies of SuperIntent. In data marketplaces, blockchain certifies provenance and ownership, while AI extracts value through insights and predictions, as explored in AIArena and frameworks like opML. These are still early experiments, but they clearly point the way: infrastructures that combine adaptive intelligence and verifiable transparency.
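The “dynamic rates” these DeFi experiments adjust are typically variations on a utilization-based curve, in the spirit of the kinked rate models popularized by lending protocols such as Compound and Aave. The sketch below uses purely illustrative parameters, the kind of values an AI layer might tune continuously instead of hard-coding:

```python
def borrow_rate(utilization: float,
                base: float = 0.02,
                slope1: float = 0.08,
                slope2: float = 0.60,
                kink: float = 0.80) -> float:
    """Kinked interest-rate curve (illustrative parameters).

    Rates rise gently while the pool's utilization is below the kink,
    then steeply above it, to discourage borrowing and attract
    liquidity back when the pool runs low.
    """
    if utilization <= kink:
        return base + slope1 * (utilization / kink)
    excess = (utilization - kink) / (1.0 - kink)
    return base + slope1 + slope2 * excess
```

With these numbers, a half-utilized pool borrows at 7%, a pool at the 80% kink at 10%, and a nearly drained pool at over 50%, which is the self-correcting behavior on-chain rate models aim for.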

Time horizon

0–3 years — We will see the first prototypes, especially in DeFi (dynamic rates, AI-regulated liquidity) and supply chain (certified traceability + risk prediction). Indicators: an increase in the number of pilot projects integrating AI on-chain; first partnerships between startups and large logistics/financial companies. Limits: high on-chain computation costs, lack of shared standards, and risks of unverifiable bias in the models.

3–5 years — Hybrid platforms with autonomous governance could emerge: intelligent DAOs where AI processes complex scenarios and blockchain certifies decisions and votes. Indicators: first cases of DAOs with deliberative AI; regulators starting to define dedicated policies. Risks: institutional resistance due to fear of loss of control, vulnerability to attacks on models or input data.

5–8 years — The AI + blockchain convergence could turn into a new economic and institutional infrastructure: markets and organizations capable of making complex decisions without central hierarchies. The impact would extend to finance, energy, urban governance, and even local politics. Indicators: adoption of on-chain verifiable AI systems in at least 10–15% of large institutions (BCG estimate, 2032). Risks: concentration of power in model providers, opaque governance of those models, and large-scale technical audit difficulties.

15. A tireless financial advisor

Finance has always been fertile ground for technological innovation, and AI is already changing the way we invest. Today, robo-advisors that build balanced portfolios and algorithmic trading systems move billions in the markets, but they remain the prerogative of banks and hedge funds. With the arrival of increasingly sophisticated agents, this barrier is breaking down: AI will be able to create personalized portfolios, adapt them in real time, and integrate unconventional signals such as social media or consumer trends. Previously exclusive tools will become accessible to everyone: a true democratization of investing.

If AI makes investing smarter, blockchain makes it transparent, verifiable, and intermediary-free. We can imagine funds managed by AI agents through public smart contracts, with immutable on-chain decisions, or DeFi protocols that dynamically regulate rates and liquidity. In this scenario, even small savers will be able to access complex logic without relying on banks or centralized funds.

Image generated by author

A concrete scenario

Amanda, a freelance consultant, doesn’t have access to corporate pension plans and has always put off building a private pension, fearful of the sector’s complexity. So she decides to rely on an AI agent. The agent collects data on her variable income, recurring expenses, her savings habits, and the level of risk she’s willing to take. Based on this, it builds a personalized portfolio, diversified across long-term bonds, global ETFs, and a marginal portion of more dynamic assets. Each month, based on Amanda’s actual income, the agent decides how much to allocate to the fund, adapting her contributions without forcing her into rigid commitments. In times of market volatility, it automatically reduces exposure to risky assets, protecting the stability of her capital; when markets are more favorable, it gradually increases dynamic assets to maximize growth. As the years pass, Amanda doesn’t have to worry about studying complex charts or comparing dozens of financial products: the AI agent accompanies her every step of the way, sending her simple and clear updates and showing her a projection of her future pension. Thus, a problem that seemed unsolvable becomes a fluid, transparent process tailored to her professional life.
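Two of the rules in Amanda's scenario (contributions proportional to actual income, and risk exposure that shrinks when markets get turbulent) can be sketched very simply. Volatility targeting is one standard technique for the latter; every number and function name here is illustrative, not any real product's logic:

```python
def monthly_allocation(income: float,
                       fixed_expenses: float,
                       savings_rate: float = 0.15) -> float:
    """Contribution scales with what was actually earned this month,
    so lean months never force a rigid fixed payment."""
    return max(0.0, (income - fixed_expenses) * savings_rate)

def risky_share(base_share: float,
                volatility: float,
                vol_target: float = 0.15,
                floor: float = 0.10) -> float:
    """Volatility targeting: scale the risky sleeve down when realized
    market volatility exceeds the target; never exceed the base share,
    never drop below a small floor."""
    scaled = base_share * min(1.0, vol_target / max(volatility, 1e-9))
    return max(floor, scaled)
```

A real advisor agent would wrap rules like these in forecasting, tax logic, and regulatory constraints; the point of the sketch is only that the adaptive behavior in the scenario is a composition of small, auditable policies.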

Socio-economic consequences

The convergence of AI and finance, whether centralized or decentralized, brings both opportunities and risks. On the one hand, it can democratize access to advanced tools, reduce information asymmetries, and increase transparency. On the other hand, critical issues emerge: the opaqueness of AI models (unexplainable decisions), volatility amplified by automatic reactions, data manipulation (false signals that mislead algorithms), the lack of protections for small investors, and the regulatory vacuum, with governments forced to regulate decentralized and difficult-to-control systems.

Examples

Several experiments are already underway. In the world of purely AI-driven financial advisors, Jump has raised $20M to empower financial advisors with intelligent workflows; PortfolioPilot helps investors monitor portfolios, optimize taxes, and receive personalized recommendations; More Wealth offers a robo-advisor that also tracks users’ psychological behavior; FP Alpha leverages AI to read complex financial documents and generate planning insights; Origin proposes a “personal AI financial advisor” integrated with budgeting and investments; Rebellion Research applies advanced quantitative models for investment recommendations; while Moneyfarm is an established European robo-advisor that builds diversified portfolios for small and mid-sized investors.

At the same time, hybrid AI + blockchain solutions are emerging: Sahara AI is developing a decentralized advisory platform that rewards users, data providers, and trainers; Roobee aims to democratize access to tokenized investments; SingularityNET is creating a decentralized marketplace for AI services, including wealth management and predictive analytics; startups like Allium combine intelligent queries with on-chain certification to analyze large volumes of data with applications in security and traceability; research projects like AgileRate propose dynamic interest rates in DeFi lending markets; while ISEK and SuperIntent are experimenting with multi-agent models and intent-based strategies for decentralized decision-making.

Time horizon

0–3 years — AI financial advisors will remain complementary tools: advanced robo-advisors, smart budgeting apps, and personalized savings agents. Indicators: robo-advisor market growth to over $3 trillion in AUM by 2027 (Statista, 2024); regulatory sandboxes emerging in the US, EU, and Asia to test AI-driven solutions. Limits: poor transparency of algorithms, difficulty in auditing, and a low level of trust among retail users.

3–5 years — The first applications will appear as always-on independent financial advisors, capable of monitoring income/expenses, allocating funds, managing risk, and even proposing personalized pension plans. Indicators: percentage of retail investors using at least one AI advisor (McKinsey estimates more than 20% by 2030); first institutional adoptions in banks and insurance companies as automated advisory services. Risks: amplification of volatility due to automatic market reactions, possible manipulation of input data, and lack of protection for small investors.

5–8 years — Autonomous financial advice will become mainstream: personalized AI platforms will handle not only investments, but also taxation, retirement planning, and estate planning. Indicators: at least 30–40% of retail portfolios managed in AI-driven autonomous mode; first stringent regulatory requirements on explainability and legal liability of models. Risks: vulnerability to systemic crises self-generated by emergent agent behaviors, loss of pluralism in available financial strategies.

16. The Holy Grail of AI: Automating Research and Development

Of all the applications of artificial intelligence, the most ambitious is the automation of research and development (R&D). The idea is that intelligent machines will not simply assist scientists but will autonomously design experiments, formulate hypotheses, and engineer innovations. This prospect is often called the “holy grail” of AI because it requires a level of creativity comparable to that of humans — a skill whose theoretical replicability is still unknown.

What is certain, however, is that AI already plays a crucial role as a vertical assistant. In biology and medicine, it identifies correlations between genes and diseases and accelerates the design of new pharmaceutical molecules. In physics and engineering, it uncovers hidden patterns in experimental datasets and optimizes complex models. In mathematics, some models are already able to prove theorems or suggest new conjectures, opening up scenarios that were once the exclusive domain of human ingenuity. In the fields of energy and materials, it suggests innovative combinations for batteries, solar panels, or high-performance alloys.

Image generated by author

A concrete scenario

On the campus of a European biotech, a small team works on research into new antibiotics against resistant bacteria. In the past, designing and testing each molecule required months of work and enormous resources. Today, however, their lab has become a hybrid human-AI ecosystem. At night, while researchers sleep, AI agents orchestrate the work of robotic arms and automated platforms: they design new molecules, digitally simulate interactions, select the most promising ones, and launch real-world microexperiments. In the morning, the team finds the results already on the table: dozens of rejected hypotheses and two or three candidates worthy of further investigation. The team no longer has to start from scratch, but instead focuses on validating and interpreting the best results, discussing the ethical, clinical, and commercial implications. The idea-test-validation cycle, which once took months, is reduced to a few days. This doesn’t eliminate the role of researchers, but rather transforms it: less time spent repeating routine experiments, more energy devoted to strategic questions, scientific choices, and ethical oversight. Thus, the promise of the “holy grail” of AI no longer appears as science fiction, but as a laboratory operating 24/7, capable of generating knowledge at a rate never seen before in the history of science.
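The nightly loop in this scenario (generate many candidate designs cheaply, score them in silico, and send only the best few to physical experiments) is essentially a design-simulate-select funnel. A toy sketch, where the generator and the scoring function are placeholders for a real generative model and a real simulation:

```python
import random

def design_candidates(n: int, rng: random.Random) -> list[dict]:
    """Stand-in for a generative model proposing candidate molecules."""
    return [{"id": i, "params": rng.random()} for i in range(n)]

def simulate_score(candidate: dict) -> float:
    """Stand-in for an in-silico interaction/docking simulation."""
    return candidate["params"]

def overnight_cycle(n_designs: int = 50,
                    n_lab_slots: int = 3,
                    seed: int = 0) -> list[dict]:
    """One night of the loop: many cheap hypotheses, few real experiments.
    Only the top-scoring candidates go to the robotic platform."""
    rng = random.Random(seed)
    candidates = design_candidates(n_designs, rng)
    ranked = sorted(candidates, key=simulate_score, reverse=True)
    return ranked[:n_lab_slots]
```

The value of self-driving labs comes from iterating this funnel: the morning's experimental results feed back into the next night's generator, so each cycle searches a better-informed region of the design space.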

Socio-economic consequences

Automating R&D even partially means drastically reducing discovery and development times, shortening the idea-to-patent cycle, and reducing testing costs. Companies that adopt these tools first will gain enormous competitive advantages, capable of generating innovations in months rather than years. However, this acceleration risks concentrating power in the hands of a few players who control the most advanced platforms, raising barriers to entry for independent laboratories and less well-equipped countries. Globally, the geography of innovation could rapidly reshape itself, creating new economic and scientific polarizations.

Examples

The transformation is already visible. In biotech, Cradle Bio generates protein sequences with desired properties while reducing design-test cycles; in pharma, Eli Lilly launched TuneLab to put AI/ML drug discovery tools into the hands of small biotechs as well. The number of self-driving labs, which combine robotics (e.g., automated pipetting platforms like Opentrons) and AI models to autonomously design, schedule, and execute experiments, is growing. On the mathematical front, systems like Gödel-Prover prove theorems or propose testable conjectures. Projects from DeepMind and BioNTech aim to create real “laboratory assistants,” capable of monitoring instruments, predicting outcomes, and supporting experimental design.

Time horizon

0–3 years — AI will be the ubiquitous co-pilot of research: data analysis, target selection, simulations, and automation of repetitive experiments. In parallel, we will see the first reliable results in AI-assisted theorem proving. Indicators: increase in scientific publications reporting the use of AI (already over 7% in Nature and Science, 2024); growth of the global AI-in-R&D market, estimated at $20 billion by 2026 (Allied Market Research). Risks: still “black box” models that are difficult to explain; difficulty standardizing AI-driven scientific protocols.

3–5 years — Semi-autonomous laboratories will emerge, capable of covering a significant portion of the R&D cycle: molecular and materials design, experiment setup and readout, and automatic iterations with increasingly standardized tools. Indicators: increasing partnerships between universities and biotech companies; growing number of self-driving labs funded by governments and VC funds. Risks: high integration costs, lack of regulatory frameworks for the use of sensitive data (especially in biotech and pharmaceuticals).

5–8 years — Automation could encompass most of the non-strictly creative phases: hypothesis generation, testing, and accelerated validation. AI-driven pipelines will be adopted by leading companies in biotech, chemicals, materials, and engineering. Indicators: at least 30–40% of molecular discoveries attributed to AI-first processes; average drug development time reduced from 10–12 to 5–7 years (OECD, 2025). Risks: concentration of power in a few actors with access to advanced AI infrastructure; ethical dilemmas on ownership of discoveries made by systems that are not entirely human.

17. Political systems and governance with AI

Politics is one of the most sensitive areas in which artificial intelligence can be applied. Experiments are already underway: in Iceland, in 2023, an AI model was used to help draft a bill; in the United Kingdom and the EU, assisted legislative drafting trials are underway. At this stage, AI acts as an institutional consultant, a sort of digital think tank: it analyzes vast amounts of economic, environmental, and social data and recommends evidence-based policies — tasks that a human team could hardly complete quickly.

Image generated by author

A concrete scenario

Imagine Diego, the mayor of a medium-sized city. Every year, he must decide how to allocate the municipal budget: public transportation, schools, local healthcare, and urban maintenance. In the past, the process was long and contentious: dozens of meetings, polarized opinions, pressure from lobbies and interest groups. With the introduction of an AI-powered deliberative platform, the picture has changed. Citizens express their priorities through an accessible digital system: some ask for more bike lanes, others for support for the elderly, and still others push for the digitalization of schools. The AI collects input, eliminates duplication and manipulative messages, synthesizes data, and generates different budget scenarios, each with clear pros and cons and simulations of their social and economic impact. Diego no longer receives a sea of raw opinions, but a structured map of community preferences, balanced with predictive analytics on the impact of decisions. The city council discusses the AI-generated options and makes the final decision with greater awareness and transparency. Citizens, for their part, can consult online the reasons why certain priorities were accepted and others postponed. The result is not a government “delegated to the machine,” but a more transparent, inclusive, and data-based political process: AI becomes a trusted mediator between citizens and institutions.
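The aggregation step in Diego's platform (deduplicate citizen input, tally priorities, turn the tally into budget scenarios) can be sketched in a few lines. Real deliberative tools such as Polis use opinion clustering and far richer modeling than simple counting; this toy version only illustrates the pipeline shape:

```python
from collections import Counter

def aggregate_priorities(submissions: list[tuple[str, str]]) -> Counter:
    """Tally priorities, counting each (citizen, topic) pair only once,
    so repeated or spammed submissions don't inflate support."""
    unique = {(citizen, topic) for citizen, topic in submissions}
    return Counter(topic for _, topic in unique)

def budget_scenario(priorities: Counter, budget: float) -> dict[str, float]:
    """One naive scenario: split the budget proportionally to support.
    A real platform would generate several scenarios with impact
    simulations, not just a proportional split."""
    total = sum(priorities.values())
    return {topic: budget * count / total
            for topic, count in priorities.items()}
```

The council's role in the scenario stays intact: code like this only structures the input, while the final allocation remains a human, political decision.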

Socio-economic consequences

The introduction of AI into governance can improve efficiency and transparency, reduce discretion, and increase the predictive capacity of institutions. It can also foster more inclusive decision-making processes, aggregating opinions and synthesizing them without the distortions typical of polarized political debate. At the same time, significant risks emerge: a loss of democratic legitimacy, the perception of increasingly technocratic and citizen-disconnected governments, the potential for manipulation by those controlling data and models, and new forms of concentration of power. In the long run, if poorly designed, these systems could undermine trust in democracy; if well designed, they could strengthen it by offering more transparent and inclusive tools for participation.

Examples

Some experiments are already a reality. In Iceland, AI has been used for legislative drafting; the European Parliament is testing automated drafting and policy analysis tools; Taiwan has been using vTaiwan for years, a digital deliberative platform that could be enhanced by AI to synthesize citizen contributions. Software such as Polis, already in use in Seattle and Taiwan, aggregates and synthesizes public opinion, anticipating what more advanced generative systems will be able to offer.

Time horizon

0–2 years — AI will be used primarily as an analysis and drafting tool to support parliaments and ministries, with initial experiments in digital citizen engagement. Indicators: increase in institutions testing AI-assisted legislative drafting platforms (already adopted in Iceland and the EU); percentage of policy papers citing AI tools as a source of analytical support. Risks: poor transparency of algorithms, lack of shared standards to distinguish between technical support and political influence.

3–5 years — AI can be integrated into public consultation platforms, synthesizing collected opinions and transforming them into clear and actionable proposals. Indicators: growth of AI-powered digital deliberative platforms (e.g., Polis) adopted in cities or states; increased government budgets allocated to AI-based digital democracy systems. Risks: polarization if datasets do not represent all segments of the population; risk of manipulation of contributions (e.g., bots or orchestrated campaigns).

5–10 years — We could witness the birth of AI-based permanent advisory bodies, capable of proposing large-scale allocation policies or decisions. Indicators: number of governments institutionalizing AI-driven committees as part of the legislative process; percentage of policy proposals originating from AI-first platforms. Risks: loss of democratic legitimacy, concentration of power in those who control models and data, and regulatory resistance. The ethical and political debate will become increasingly heated: to what extent is it acceptable to delegate political decisions to algorithmic systems?

Conclusions

The future of artificial intelligence is not distant: it is already here — in searches that no longer return links but direct answers, in tutors that adapt to each student, in systems that generate software, assist in medical diagnoses, and even suggest public policies.

This transformation, however, is far from neutral. AI is a powerful accelerator of efficiency, knowledge, and economic growth. According to McKinsey (2023), its global impact could reach up to $4.4 trillion per year, equivalent to a 5–7% increase in worldwide GDP by 2030. Other studies, from the OECD and the IMF, confirm that the adoption of AI will deeply influence productivity across nearly all industrial sectors. Yet these projections, while striking, must be interpreted with caution: they are scenarios, not certainties.

Much of what has been described in this article — multimodal assistants, generative games, self-writing software, AI-driven finance — ranges across three levels:

  • existing products and services that already shape daily life;
  • emerging prototypes and pilot projects still confined to niche use;
  • speculative visions that may or may not materialize in the proposed timeframe.

Recognizing the differences between these levels is crucial. Otherwise, we risk confusing what is happening today with what may happen tomorrow, and underestimating the technical, cultural, and regulatory hurdles that stand in the way. AI brings opportunities for democratization, accessibility, and efficiency — but it also raises real challenges:

  • high computational and energy costs that weigh on sustainability;
  • biases and opacity that can entrench inequalities;
  • risks of power concentration in the hands of a few global providers;
  • the possibility of social fragmentation, where personalized realities weaken the sense of shared knowledge.

The real game is unfolding now — not in the robots walking beside us, but in the invisible software that mediates how we read, invest, learn, and govern. Guiding this revolution requires clear choices:

  • transparency — because we cannot entrust our future to black boxes;
  • inclusion — because the benefits of AI must reach everyone, not just a few;
  • education — because only an informed society can avoid blind dependence on algorithms.

Ultimately, the future is unwritten. AI will not only reflect human choices — it will amplify them. Whether it becomes a tool of emancipation or of inequality depends on us. The call is both simple and radical: not to be passive spectators, but active protagonists in a transformation already rewriting the rules of how we live, work, and decide together.


From Knowledge to Power: How AI Is Reshaping the World was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Reality Check for Enterprise AI

My Search for a Practical Way to Use Agentic Coding for Enterprises.

Agentic Design Patterns with LangGraph

If there’s one thing I’ve learned building AI systems over the last couple of years, it’s this: patterns matter. Whether we’re designing…

TAI #172: OpenAI’s GDPval Shows AI Nearing Expert Parity on Real-World Work

Also, Claude Sonnet 4.5, Gemini 2.5 updates, META Code World Model, OpenAI’s 250GW plans, and more.

What happened this week in AI by Louie

This week feels like a follow-up to last week’s discussion, as our attention was again drawn to both OpenAI’s escalating energy ambitions and additional studies on how AI is being utilized in the workplace. The drumbeat of progress continued with new releases, including Claude Sonnet 4.5, Google’s Gemini 2.5 Flash and Lite upgrades, and its Robotics 1.5 model, as well as a new Code World Model from Meta. OpenAI’s ambitions and hype were again on full display, with Sam Altman mentioning in an internal note that OpenAI will increase its AI compute electricity use 10x to 2GW in 2025. Not happy with 10GW last week, he now targets access to 250GW by 2033, a figure that would represent roughly half of the U.S. grid’s average demand today and imply a staggering $12.5 trillion in capital expenditure. As the industry looks toward a future of nation-scale AI investment, a new benchmark from OpenAI called GDPval provided the most in-depth look yet at what today’s models can already do for high-value knowledge work.

GDPval is a new evaluation that moves beyond academic tests and coding challenges to measure AI performance on economically valuable, real-world tasks. The benchmark includes 1,320 specialized tasks across 44 knowledge-work occupations, from lawyers and software developers to nurses and project managers, which collectively represent over $3 trillion in annual U.S. wages. Each task, designed by an industry professional with over 14 years of experience, is based on a real work product and took a human expert an average of seven hours to complete. In a blinded test, other experts from the same field then judge whether the AI’s output is better than, as good as, or worse than a deliverable produced by a human expert.

The results show that frontier models are rapidly closing the gap with human professionals. In just one year, the win/tie rate for the best models has jumped fourfold, from 12% with GPT-4o to nearly 48% with Claude Opus 4.1. While Opus 4.1 took the top spot, excelling in aesthetics and presentation, GPT-5 was close behind and showed superior performance on tasks requiring pure accuracy and instruction following. This means that on a representative set of complex tasks for some of the highest-value jobs in the economy, the best AI model already has a coin-flip chance of matching or beating a human expert. Given that the new Claude Sonnet 4.5, released today, beats Opus 4.1 on most other benchmarks, we wouldn’t be surprised if LLMs have now surpassed the human parity mark on these tasks.

Source: OpenAI.

The models also performed these tasks roughly 100 times faster and 100 times cheaper than their human counterparts. Of course, that headline figure conveniently ignores the costs of human oversight, iteration, and integration required to use these models in a real-world workplace. The study also found that simple prompt improvements could boost win rates by another five percentage points, highlighting that performance gains are not just about bigger models but also smarter implementation. However, the study has clear limitations. GDPval is currently a “one-shot” evaluation; it doesn’t capture the iterative, collaborative nature of most real-world knowledge work, which involves feedback, revision, and navigating ambiguity.

Why should you care?

It’s very hard to square the incredible results of the GDPval study with the reality that most people are still struggling to get consistent value from AI on complex work. We looked at the ~220 task subset that OpenAI open-sourced, and they are indeed complex, representative pieces of work that would take an expert many hours to perform. There is no obvious flaw in the study; the flaw looks mostly to be in how most people are using these models.

The prompts used in OpenAI’s study provide a solid amount of detail and expert knowledge — tips, instructions, and warnings on things to watch for — and the models also made use of complex source documents. This is essentially prompting and context engineering 101, but we believe the vast majority of users still fall short on these fundamentals when attempting to use AI for high-value tasks. Many people still don’t grasp that it is vital to pack as much of their own expertise into the model as they can; they should be trying to help the model succeed, not just testing how it does on its own. Beyond the technical “how,” there’s often a failure of imagination; many people simply don’t know how to begin assigning complex, multi-step work to an AI.

This is compounded by a persistent enterprise problem: access. Many workplaces don’t make it easy to use the best models with enterprise-tier security and privacy plans. The “bring your own model to work” phenomenon means many are still trying to tackle professional tasks with insecure and inferior free-tier models. The GDPval results are a profound signal that AI is ready for much more substantive work, but its economic impact is being throttled by a massive and growing competency gap. The time to master using AI for real work is now. The rapid, linear progress shown in this benchmark is the justification for the seemingly astronomical compute investments being planned. Early movers who invest in building the skills and custom workflows to leverage these capabilities will capture an enormous advantage.

Louie Peters — Towards AI Co-founder and CEO

Hottest News

1. Anthropic Introduces Claude Sonnet 4.5

Anthropic has released Claude Sonnet 4.5, an update focused on coding and agentic workflows. The model shows stronger performance on reasoning, mathematics, and long-horizon tasks. In internal tests, it maintained focus across sessions lasting more than 30 hours. On the OSWorld benchmark for computer use, Sonnet 4.5 reached 61.4%, compared to 42.2% for Sonnet 4. The release also introduces a VS Code extension, new API features for memory and context, and an Agent SDK. Pricing remains unchanged at $3 per million input tokens and $15 per million output tokens.

2. Nvidia Plans to Invest up to $100B in OpenAI

Nvidia plans to invest up to $100 billion in OpenAI, aiming to build massive data centers for training AI models. This move helps OpenAI diversify from its primary investor, Microsoft. Both companies signed a letter of intent to deploy Nvidia systems for AI infrastructure, with Nvidia acting as OpenAI’s preferred strategic partner for compute and networking.

3. OpenAI Launches ChatGPT Pulse

OpenAI introduced ChatGPT Pulse, a proactive feature that runs overnight to synthesize user chat history, saved memory, and optional connectors like Gmail and Google Calendar into personalized daily briefs delivered as scannable cards on mobile. Limited to Pro subscribers during preview due to high inference costs, the feature offers personalization via preferences shared in conversation, source citations similar to ChatGPT Search, and a deliberate design that avoids infinite scrolling. It also emphasizes safety checks and opt-in data use, with thumbs up/down ratings for feedback.

4. Google DeepMind Releases Gemini Robotics 1.5

Google DeepMind has released Gemini Robotics 1.5, an agentic system comprising Gemini Robotics-ER 1.5 for high-level planning and reasoning and Gemini Robotics 1.5 for execution, enabling multi-step tasks such as sorting recyclables via web search and human interaction. It achieves state-of-the-art performance on 15 benchmarks, including ERQA (embodied reasoning QA) and Point-Bench (manipulation), with transfer learning across robot embodiments without requiring specialization. Available in preview via the Google AI Studio API, it uses natural language for interpretable reasoning sequences and supports tools like Google Search.

5. Microsoft Adds Anthropic’s AI to Copilot

Microsoft has integrated Anthropic’s Claude Sonnet 4.5 and Claude Opus 4.1 into Microsoft 365 Copilot, making them available alongside OpenAI’s models in the Researcher agent and Copilot Studio. Users can now choose between OpenAI and Anthropic models within both environments.

6. Databricks Will Bake OpenAI Models Into Its Products

Databricks announced that enterprises can now run OpenAI’s GPT-5 models directly within its platform, with support for SQL, APIs, Model Serving, and Agent Bricks. The integration provides governance, monitoring, and security controls for enterprise deployments, enabling organizations to utilize GPT-5 on their own data without requiring additional setup. The move positions Databricks as a hub for building AI agents on enterprise data while maintaining compliance and visibility.

7. Alibaba To Offer Nvidia’s Physical AI Development Tools in Its AI Platform

Alibaba is integrating NVIDIA’s Physical AI software stack into its cloud platform, targeting applications in robotics, autonomous driving, and smart spaces such as factories and warehouses. The tools can generate 3D replicas of real-world environments to create synthetic training data for AI models. As part of the move, Alibaba is also expanding its infrastructure globally, with new data centers coming online in Brazil, France, and the Netherlands, and increasing its AI investments beyond its previous $50 billion target.

Five 5-minute reads/videos to keep you learning

1. Integrating CI/CD Pipelines to Machine Learning Applications

This guide shows how to automate deployment for machine learning applications using a serverless AWS Lambda setup. It walks through a GitHub Actions workflow that runs automated tests, scans for vulnerabilities with Snyk, and builds container images via AWS CodeBuild. After a manual review step, a separate workflow deploys the image to Lambda. The article includes detailed configurations for IAM roles, secure OIDC authentication between GitHub and AWS, and an optional Grafana setup for advanced monitoring.

2. Six Ways to Control Style and Content in Diffusion Models

This article explores six techniques for controlling image generation with diffusion models. It compares resource-intensive methods, such as Dreambooth, with lighter alternatives, including LoRA and IP-Adapters. It also explains how ControlNets provide precise structural guidance. The analysis also highlights trade-offs across the approaches, concluding that combining IP-Adapters for style with ControlNets for structure produces the most reliable results.

3. CSV Plot Agent with LangChain & Streamlit: Your Introduction to Data Agents

This tutorial demonstrates how to create a CSV Plot Agent that automates exploratory data analysis using natural language queries. Using LangChain, GPT-4o-mini, and Streamlit, the agent incorporates Python tools for validating data schemas, identifying missing values, and generating plots, including histograms and scatter plots. The walkthrough covers tool definition, model configuration, agent logic, and UI development with Streamlit.

4. ATOKEN: A Unified Tokenizer for Vision Finally Solves AI’s Biggest Problem

The article explains how ATOKEN is a unified visual tokenizer that handles images, videos, and 3D objects within a single neural architecture. ATOKEN overcomes the traditional need for separate systems for image generation, video processing, and 3D modeling, treating all visual content types in a shared coordinate space that enables models to learn across modalities. The article also highlights how ATOKEN enables perfect 4K image reconstruction, complex video understanding, and 3D model generation, all from a single model.

Repositories & Tools

  1. Chrome DevTools MCP lets your coding agent (such as Gemini, Claude, Cursor, or Copilot) control and inspect a live Chrome browser.
  2. ShinkaEvolve is a framework that combines LLMs with evolutionary algorithms to drive scientific discovery.
  3. Qwen3Guard is a multilingual guardrail model series developed by the Qwen team at Alibaba Cloud.

Top Papers of The Week

1. Qwen3-Omni Technical Report

This paper presents Qwen3-Omni, a single multimodal model that maintains state-of-the-art performance across text, image, audio, and video without any degradation relative to its single-modal counterparts. Its Thinker-Talker MoE architecture integrates text, image, audio, and video processing across 119 languages. The model reduces latency with a causal ConvNet, and its submodels, including Qwen3-Omni-30B-A3B-Captioner, provide accurate captions for diverse audio inputs, publicly released under Apache 2.0 license.

2. Video Models Are Zero-Shot Learners and Reasoners

LLMs revolutionized language processing with zero-shot learning, and now this paper shows how Veo 3 is advancing video models towards a similar trajectory in vision. It can solve a broad variety of tasks it wasn’t explicitly trained for: segmenting objects, detecting edges, editing images, understanding physical properties, recognizing object affordances, simulating tool use, and more. Its visual reasoning skills indicate video models are becoming unified, generalist vision foundation models.

3. Teaching LLMs To Plan: Logical Chain-of-Thought Instruction Tuning for Symbolic Planning

This paper presents an instruction-tuning framework, PDDL-Instruct, designed to enhance LLMs’ symbolic planning capabilities through logical chain-of-thought reasoning. By mixing structured reasoning with external verification, LLMs can learn real logical skills. The approach could help models handle planning, coding, and other complex multi-step problems.

4. GAIA: A Benchmark for General AI Assistants

This paper introduces GAIA, a benchmark of 466 real-world tasks. It proposes real-world questions that require a set of fundamental abilities, such as reasoning, multimodal handling, web browsing, and general tool-use proficiency. Instead of superhuman challenges, it focuses on tasks trivial for people but difficult for models, offering a clearer test of practical assistant capabilities.

5. VCRL: Variance-Based Curriculum Reinforcement Learning for Large Language Models

This paper introduces VCRL, a curriculum reinforcement learning framework for large language models, which adjusts training sample difficulty based on reward variance. This method, tested on five mathematical benchmarks and two models, outperforms existing RL approaches by aligning more closely with human learning processes, moving from easier to more challenging tasks.

6. Towards an AI-Augmented Textbook

This paper proposes “Learn Your Way,” an AI pipeline that personalizes textbook content by grade level and interests, then transforms it into multiple representations (immersive text, narrated slides, audio lessons, mind maps) with embedded formative assessment. In a randomized study involving 60 students, the system significantly improved both immediate and three-day retention scores compared to a standard digital reader.

Quick Links

1. Meta has launched a pro-AI PAC called the American Technology Excellence Project, the company’s latest effort to combat policies it sees as harmful to the development of AI. Axios reports that Republican veteran Brian Baker and Democratic consulting firm Hilltop Public Solutions will run the new super PAC. The focus on parental controls arises amid growing concerns about child safety surrounding AI tools.

2. OpenAI introduces the GDPval evaluation. GDPval assesses AI on 1,320 real-world tasks from 44 GDP-contributing occupations, with expert graders comparing outputs to human work. Claude Opus 4.1’s outputs match or beat expert deliverables nearly 50% of the time, GPT-5 leads on accuracy-focused tasks, and the best models’ win/tie rate has roughly quadrupled in a single year.

Who’s Hiring in AI

Junior AI Engineer (LLM Development and Technical Writing) @Towards AI Inc (Remote)

Software Developer 4 @Oracle (Multiple US Locations)

Gen AI Engineer @Proximate Technologies Inc. (Plano, TX, USA)

AI/ML Engineer @Greenlight Financial Technology (Remote Friendly)

Senior AI Researcher @Charles Schwab (San Francisco, CA, USA)

IT Intern — AI & Emerging Technologies @Dominion Energy (Richmond, VA, USA)

Interested in sharing a job opportunity here? Contact sponsors@towardsai.net.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation.


TAI #172: OpenAI’s GDPval Shows AI Nearing Expert Parity on Real-World Work was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

High AI Accuracy. Hidden AI Bias. The AI Trap Costing Companies Millions.

Even with high accuracy, AI systems can still discriminate against users based on factors such as race, age, gender, or culture. This…

Small Language Models Are the Future of Agentic AI: Here’s Why

Why specialized SLMs under 10B parameters are replacing 175B LLMs in production AI agents — with 30x cost savings, better performance, and…

End-to-End Workflow Automation with n8n: Google Forms, Sheets, MongoDB, and AI

Well, it’s not just hype. n8n is one of the simplest interfaces for building some of the most complex automation workflows. If you want to automate your work, try this tool; you’ll be amazed at what a layer of abstraction can do.

But we’re building agents, right? Well, let’s just say we are automating an AI workflow.

So, in this article, we will build a workflow to automate the process of getting data after form submission, saving it in MongoDB, and then sending a welcome email created by LLM to a particular email address. This is going to be the most beginner-friendly article about n8n.

Setting up n8n

We’ll start by setting up n8n on our local machine. This is quite simple. Make sure you have Node.js (≥ 18) installed.

# Check if you have Node installed or not
node -v
npm -v
# Install n8n globally
npm install n8n -g
# Command to start n8n
n8n

Now you can access it here — http://localhost:5678

Make sure to use the Chrome browser; I faced issues with Safari.

You should see an interface like this once you visit the above-mentioned URL.

There’s an orange button in the top right corner to get started. A blank workflow page should look like this —

Setting Up a Google Form with n8n

You can create a Google Form here. I have created a demo form; you can access it here (But you won’t receive any mail after filling it out).

But we can’t create a trigger on a Google Form, so we will create a trigger on a Google Sheet that stores the submitted data of the form. Make sure to enable Link to Sheets in the response tab.

Now, in the n8n workflow, click on “Add first step”.

Upon clicking on “On row added or updated” you should see something like this —

Here, by clicking on “Select Credential”, you will get the option “Create New Credential”.

You need to collect your Client ID and Client Secret from Google Cloud Console.

First, you need to enable the Google Sheets API. Make sure you have an OAuth client created with the redirect URL that n8n provides. Remember, you also need to enable the Google Drive API.

Once you complete the setup, click on the Sign in with Google button.

Upon completing a successful setup, you should get a message showing that the setup is done.

Setting up MongoDB with n8n

You need to collect the MongoDB connection string from MongoDB Atlas and add it to the n8n Connection String field. Also, add the database name in the Database field.

n8n Portal
Mongo DB Atlas
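For reference, the Atlas connection string follows the standard mongodb+srv format; the username, password, and cluster host below are placeholders, not real values:

```
mongodb+srv://<username>:<password>@cluster0.example.mongodb.net/
```

Since n8n takes the database name in its own Database field, it can be left out of the string itself.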

It’s as simple as that. Let’s get back to the workflow, and things will make more sense.

The Workflow

Our workflow should look something like this. We have two new components that we haven’t discussed yet: the AI Agent and Gmail.

The basic idea is to generate a welcome email for the new user who just submitted the form. So, here is the flow —

  1. User submits the form, and the data is added to the sheet.
  2. A function to organize the data into proper JSON format (optional).
  3. An AI Agent to compose a welcome email for the new user.
  4. A Set node to format the input data for MongoDB (we will add this later).
  5. Store all the data in a MongoDB database.
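Step 2, the optional organizing function, can be sketched as plain JavaScript; in n8n this logic would live in a Code node. The column names "Name" and "Email Id" are assumptions matching the demo form, so adjust them to your sheet’s headers:

```javascript
// Sketch of the optional "organize data" step. In n8n this would go in
// a Code node; it's written as a standalone function here so the shape
// is easy to follow. "Name" and "Email Id" are assumed column headers.
function normalizeRow(row) {
  return {
    name: String(row["Name"] || "").trim(),
    email: String(row["Email Id"] || "").trim().toLowerCase(),
  };
}

// Example: a raw row roughly as the Google Sheets Trigger might emit it.
console.log(normalizeRow({ "Name": "  Ada Lovelace ", "Email Id": "Ada@Example.com " }));
```

Trimming and lowercasing here is just defensive cleanup so the email stored in MongoDB is consistent regardless of how the user typed it into the form.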

Now, if everything works fine, click on the Google Sheets Trigger. You should see something like this —

Choose the Document; in our case, it is n8n test. Just below it, you’ll have an option to choose the Sheet.

Next, add a model for the AI Agent. I’m going to choose Groq. Make sure you have credentials (an API key) for the Groq API.

— These are the details of the language model.

You can also add a Simple Memory and Tool, just by clicking on the add button in the bottom section of the AI Agent.

But in our case, we don’t need any memory or tools. We’ll open the Groq Chat Model and add this prompt —

Write a greeting email to {{ $json.Name }} by saying this was a process of testing the n8n workflow.

We also need to add an output parser, as we want the output of the AI Agent to be properly formatted so that we can use it for sending emails. And this can easily be done by clicking on the AI Agent component. Make sure this is turned on —
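Once the parser is on, the agent’s output arrives as structured JSON rather than free text. Here is a sketch of the item shape that downstream nodes rely on; the subject and message field names are an assumption and must match whatever schema you configure on the parser:

```javascript
// Hypothetical item emitted by the AI Agent with the structured output
// parser enabled. The "subject"/"message" keys are assumptions: they
// must match the parser's schema, because the Set expression later in
// this workflow reads the agent output's subject and message fields.
const agentItem = {
  json: {
    output: {
      subject: "Welcome aboard!",
      message: "Hi Ada, this email was generated while testing an n8n workflow.",
    },
  },
};

// Downstream nodes can then address the fields directly:
console.log(agentItem.json.output.subject); // "Welcome aboard!"
```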

The workflow should now look something like this —

Now, by following the same process as Google Sheets and Drive, you need to enable the Gmail API.

Once the Gmail API is ready to be used, make sure the Parameters fields are properly set up. Take a look at the image below to understand the syntax.

Finally, we need to store the data in MongoDB, but this part is a little tricky. Let’s start by adding a Set node before sending data to MongoDB.

This section is to properly build the inputs for MongoDB. You need to add this syntax to the JSON field.

={{
  {
    name: $node["Google Sheets Trigger"].json["Name"],
    email: $node["Google Sheets Trigger"].json["Email Id"],
    subject: $('AI Agent').item.json.output.subject,
    message: $('AI Agent').item.json.output.message
  }
}}

As you can see, the above fields are taken out of multiple fields created through the previous steps of the workflow.

Once we have all the inputs ready, let’s simply put them into the MongoDB field.

And this is the entire workflow. Below is the output from my MongoDB database.

Also, here’s the snap of the mail I received in my inbox —

Make sure the workflow is activated; only then will it execute automatically.

Conclusion

Well, this was just a simple implementation of an n8n workflow; the main idea was to give you a detailed overview of how multiple components can be used and integrated. Here, we used multiple Google products, including the Gmail API, and integrated a MongoDB database. This guide should give you a great starting point for n8n.

Hope you enjoyed it; feel free to share your thoughts in the comment section. Thanks :)


End-to-End Workflow Automation with n8n: Google Forms, Sheets, MongoDB, and AI was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Basic Workflow Automation with n8n

Getting started with workflow Automation


Beyond ROC-AUC and KS: The Gini Coefficient, Explained Simply

Understanding Gini and Lorenz curves for smarter model evaluation

The post Beyond ROC-AUC and KS: The Gini Coefficient, Explained Simply appeared first on Towards Data Science.

Actual Intelligence in the Age of AI

Jarom Hulet on mastering fundamentals, hiring well, and deciding what to write about next

The post Actual Intelligence in the Age of AI appeared first on Towards Data Science.

How to Build Effective Agentic Systems with LangGraph

Create AI workflows with agentic frameworks

The post How to Build Effective Agentic Systems with LangGraph appeared first on Towards Data Science.

The Machine Learning Lessons I’ve Learned This Month

September 2025: library or self-made, Ditto and Launchbar, reading widely and deeply

The post The Machine Learning Lessons I’ve Learned This Month appeared first on Towards Data Science.


How to get the Windows 11 2025 Update

Windows is essential for more than a billion people to connect, learn, play and work, and today we are announcing the availability of the Windows 11 2025 Update (also known as Windows 11, version 25H2). This year’s annual feature update…

The post How to get the Windows 11 2025 Update appeared first on Windows Blog.


AIhub monthly digest: September 2025 – conference reviewing, soccer ball detection, and memory traces

Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we hear about the latest research on soccer ball detection, learn about energy-based transformers, find out about memory traces in reinforcement learning, and explore some potential […]