BLOG POST
AI 日报 (AI Daily)
Humanity’s Toxic Wreckage Is Teeming With Life, Scientists Discover
Though they leach toxic chemicals, submerged explosives from World War II attract algae, mussels, and fish in high numbers.
Enterprises Beware: Agent-Washing Clouds the Future of AI
Vendors mislabel copilots as agents, raising regulatory and operational risks for firms chasing the promise of agentic AI.
The post Enterprises Beware: Agent-Washing Clouds the Future of AI appeared first on Analytics India Magazine.
LTTS, Siemens Partner for AI-led Transformation in Process Engineering & Smart Manufacturing
The deal will deliver simulation-driven automation and AI-enabled solutions for diverse sectors.
How Neysa Stands Out in the IndiaAI GPU Race
Unlike other providers focused on GPU allocation, Neysa claims to deliver an end-to-end AI cloud platform.
The current war on science, and who’s behind it
A vaccine developer and a climate scientist walk into a bar... and write a book.
Why LA Comic Con thought making an AI-powered Stan Lee hologram was a good idea
“I suppose if we do it and thousands of fans… don't like it, we'll stop doing it.”
Zuckerberg hailed AI ‘superintelligence’. Then his smart glasses failed on stage | Matthew Cantor
The Meta CEO fumbled a demo of his AI Ray-Bans, giving us hope that the robots might be too dumb to take over
As humanity inches closer to an AI apocalypse, a sliver of hope remains: the robots might not work.
Such was the case last week, as Mark Zuckerberg attempted to demonstrate his company’s new AI-enabled smart glasses. “I don’t know what to tell you guys,” Zuckerberg told a crowd of Meta enthusiasts as he tried, and failed, for roughly the fourth time to hold a video call with his colleague via the glasses.
Madeline Horwath on AI chatbots and cognitive decline – cartoon
Delivery Robot Torments Disabled Man
We would be fuming.
The post Delivery Robot Torments Disabled Man appeared first on Futurism.
Report Warns That AI Is About to Make Your Boss a Panopticon Overlord
Seize the means of computation.
Lionsgate’s Attempt to Create Movies Using AI Has Crumbled Into Disaster
Turns out that making an AI good enough to churn out entire Hollywood films is pretty hard.
Experts Alarmed That AI Is Now Producing Functional Viruses
"We're nowhere near ready for a world in which artificial intelligence can create a working virus."
Zuckerberg’s AI Glasses Guy Is Named Rocco Basilico
"I grabbed my head in horror."
Harrods Warns Customers of Data Theft in Latest IT Breach
Customers of Harrods Ltd., the luxury London department store, had their personal data stolen, the latest in a string of cyberattacks and IT breaches affecting major UK businesses this year.
The Kindbody Story: E5, The Baby Project (Podcast)
A Kindbody employee discovers the company is helping an imprisoned billionaire father multiple children through surrogates and egg donors. In this episode, reporter Jackie Davalos investigates Greg Lindberg’s “baby project” and what it illustrates about America’s unregulated fertility industry.
Canada Wants To Lure Tech Workers Who Won’t Get US H-1B Visas
Canadian Prime Minister Mark Carney wants to attract employees from the technology sector who might previously have worked in the US before President Donald Trump’s new visa fees.
Our family lived in Japan for 3 years. We're back in the US now and I really miss these 3 things.
My family moved abroad for 3 years. I loved living in Japan and miss a lot of things now that we're back in the US.
For the first time in my adult life, I visited Las Vegas without gambling. These 6 activities made it the best trip yet.
I usually hit the casinos when I visit Las Vegas, but on a recent trip, I challenged myself not to gamble. These activities made the trip fun.
I spent $35 on a business-class train ticket in Malaysia. My trip was a wildly good value, from the private lounge to the tasty food.
My ride on Malaysia's KTM ETS train from Kuala Lumpur to Penang in business class was great, from the private Ruby Lounge to the onboard meal.
I often feel lost because I'm not married and have no kids. My 93-year-old great aunt gave me a freeing piece of advice.
At 37, I travel full-time and have no kids. I worry that I'm falling behind, but my great aunt, a superager, says to stop worrying.
How America's trash fuels toxic tofu in a country across the globe
Workers in Indonesia risk their lives cooking toxic tofu over furnaces powered by US plastic waste, poisoning food and communities.
My high school friends and I take yearly trips together. We're in our 30s but still make time for each other.
Even though we don't live in the same state, we still make time to travel together throughout the year. They've seen me through a lot.
A look inside Jackie Kennedy Onassis' luxurious homes, from sprawling estates to full-floor apartments
Take a closer look at the luxurious homes of Jackie Kennedy Onassis, from sprawling estates to iconic apartments.
Tech founders and execs roast Meta's new push for AI-generated reels
Tech founders took to X to dunk on Meta's new Vibes feature, calling it "AI slop" and a new way to get vulnerable users hooked on mindless content.
Starbucks is closing over 100 North American stores — here are the locations we know so far
Starbucks on Thursday said it would close 1% of its North American stores, but didn't announce which ones. Business Insider has compiled a list.
I was lucky enough to land a full-time job before graduating from college. But I wasn't prepared for the real world.
I landed a great job before graduating, but no one taught me how to handle student loans, tax deductions, retirement, and PTO. I was lost.
I rented a Model Y for my first long-distance drive. I struggled with it for all the reasons people love the EV.
Renting a Tesla Model Y for a long-distance drive presented unexpected challenges despite its cost efficiency and driver assistance technology.
Superintelligence could wipe us out if we rush into it — but humanity can still pull back, a top AI safety expert says
AI safety expert Nate Soares told BI rushing to build superintelligence is "overwhelmingly likely" to wipe us out — but said disaster can be averted.
Are sneakers with suits ever OK? BI wants to hear from you.
In this Saturday edition of Business Insider Today, we're discussing questionable fashion choices made by Team USA at the Ryder Cup's opening gala.
An AI startup founder explains why the H-1B executive order doesn't change his hiring plans
Arko C, the co-founder and CEO of the AI startup Pipeshift, told Business Insider why he's not worried about the H-1B visa fee increase.
The Tiny Team era is here
AI-powered startups are proving the Tiny Team era is here. Five founders and employees share the pros and cons of working alongside AI agents.
My grandparents say the keys to their 65-year marriage include staying independent and having a healthy social life
My grandparents have been married for 65 years. They say independence in a relationship is important, and they show gratitude in small, everyday ways.
I work full-time while caring for both my mom and kids. It can be overwhelming, but it's taught me I don't have to be in control.
Karen Lee-Coss works a full-time job, is a mother, and cares for her mom, who moved in with her family after Lee-Coss' dad died.
Trump's H-1B visa crackdown could cut US jobs instead of creating them
Higher H-1B visa costs might not boost hiring in the US, as companies might turn to workers abroad.
Small grocery stores like Aldi and Grocery Outlet are gaining ground in the grocery wars
Small-format grocery stores, such as Aldi, Grocery Outlet, and Whole Foods Daily Shop, are gaining popularity with shoppers.
The race to ultrafast delivery is on but there are big hurdles to speeding up the time to your doorstep
Delivery speeds are getting a lot faster as shoppers prize convenience for groceries and more. But there is a limit to how quick retailers can be.
Day 57: Ansible video hands-on
Over the past few days, I’ve been diving into Ansible, and it’s been a game-changer in how I understand configuration management and automation. From setting up nodes to running ad-hoc commands, Ansible has shown how powerful and simple automation can be.
But today is different — instead of just text, I explored Ansible through a hands-on video explanation. Sometimes watching things in action makes concepts stick much faster than reading about them, and this was definitely the case.
Key Highlights from the Video
🔹 How Ansible uses a control node and an inventory file to manage multiple servers.
🔹 Running ad-hoc commands like ping and uptime to quickly test connectivity and server health.
🔹 Using playbooks for more complex automation — combining multiple tasks into a repeatable script.
🔹 The importance of YAML files (hosts, buildspec.yaml, appspec.yaml) in defining tasks and environments.
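The highlights above can be sketched in a minimal playbook. This is a hypothetical example, not from the video itself; the host group and inventory file name are illustrative:

```yaml
# playbook.yaml - a minimal playbook combining two repeatable tasks.
# Run with: ansible-playbook -i hosts playbook.yaml
# Ad-hoc equivalent of the first task: ansible webservers -i hosts -m ping
- name: Basic health check and package setup
  hosts: webservers        # group defined in the "hosts" inventory file
  become: true
  tasks:
    - name: Check connectivity
      ansible.builtin.ping:

    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
```

The same playbook works unchanged whether the inventory group contains one server or hundreds, which is exactly the simplicity-at-scale point below.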
Why This Matters
The big takeaway for me was how Ansible bridges simplicity with scale. Whether it’s one server or hundreds, the same commands work — making automation both approachable and extremely powerful.
Next Steps
I’ll continue building on this foundation, exploring more complex playbooks and integrating Ansible into CI/CD workflows.
Watching Ansible in action has definitely deepened my understanding — sometimes seeing is believing!
Here’s a link to the video I used.
Ethers.js Developer Guide – Practical
Ethers.js is a lightweight JavaScript library for interacting with the Ethereum blockchain. It provides:
- A unified way to connect to Ethereum nodes.
- Tools for creating and managing wallets.
- A simple API for calling smart contracts.
- Built-in utilities for handling blockchain-specific data formats (e.g., big numbers, hex strings, hashes).
Compared to alternatives like web3.js, Ethers.js is designed to be:
- Smaller (tree-shakeable, modular).
- Safer (immutability, strict typing).
- Developer-friendly (clean API surface).
Install via npm (or yarn/pnpm):
npm install ethers
// ESM
import { ethers } from "ethers";
// CommonJS
const { ethers } = require("ethers");
import { ethers } from "ethers";
async function main() {
const provider = ethers.getDefaultProvider();
const blockNumber = await provider.getBlockNumber();
console.log("Latest block:", blockNumber);
}
main();
- Providers – Connecting to Ethereum
A Provider is your connection to the Ethereum network. Think of it as a “read-only API”.
JSON-RPC Provider
const provider = new ethers.JsonRpcProvider("http://localhost:8545");
Infura
const provider = new ethers.InfuraProvider("mainnet", process.env.INFURA_API_KEY);
Alchemy
const provider = new ethers.AlchemyProvider("sepolia", process.env.ALCHEMY_API_KEY);
Etherscan (read-only)
const provider = new ethers.EtherscanProvider("mainnet", process.env.ETHERSCAN_API_KEY);
Browser Provider (MetaMask)
const provider = new ethers.BrowserProvider(window.ethereum);
await provider.send("eth_requestAccounts", []);
Recap:
Provider = blockchain connection.
Use RPC for local, Infura/Alchemy for production, MetaMask for browser.
- Wallets – Managing Keys
A Wallet represents an Ethereum account (private key + address).
Create a Random Wallet
const wallet = ethers.Wallet.createRandom();
console.log("Address:", wallet.address);
console.log("Mnemonic:", wallet.mnemonic.phrase);
Import from Private Key
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY);
Import from Mnemonic
const wallet = ethers.Wallet.fromPhrase("test test test ...");
Connect Wallet to Provider
const provider = new ethers.JsonRpcProvider("https://rpc.sepolia.org");
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY, provider);
Sign a Message
const signature = await wallet.signMessage("Hello Ethereum");
console.log("Signature:", signature);
Send a Transaction
const tx = await wallet.sendTransaction({
to: "0xabc123...def",
value: ethers.parseEther("0.01")
});
console.log("Transaction hash:", tx.hash);
⚡ Recap:
Wallet = identity + signer.
Always connect wallet → provider before sending transactions.
- Contracts – Interacting with Smart Contracts
A Contract object allows you to call functions defined in a smart contract.
Define a Contract
const abi = [
"function balanceOf(address) view returns (uint)",
"function transfer(address, uint) returns (bool)"
];
const tokenAddress = "0xYourTokenAddress";
const contract = new ethers.Contract(tokenAddress, abi, provider);
Read Data
const balance = await contract.balanceOf("0xabc123...");
console.log("Balance:", ethers.formatUnits(balance, 18));
Write Data (with Signer)
const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);
const contractWithSigner = contract.connect(signer);
const tx = await contractWithSigner.transfer("0xdef456...", ethers.parseUnits("1", 18));
await tx.wait();
console.log("Transfer confirmed:", tx.hash);
Listen to Events
contract.on("Transfer", (from, to, value) => {
console.log(`Transfer: ${value} tokens from ${from} to ${to}`);
});
Recap:
Contract = interface + ABI + address.
Use provider for read calls, wallet signer for transactions.
- Utilities – Everyday Helpers
Unit Conversion
const value = ethers.parseEther("1.0");
console.log(value.toString());
console.log(ethers.formatEther(value));
BigNumber Math
const a = ethers.parseUnits("1000", 18);
const b = ethers.parseUnits("250", 18);
console.log((a + b).toString()); // ethers v6 returns native bigint, so use + rather than .add()
Hashing
const hash = ethers.keccak256(ethers.toUtf8Bytes("hello"));
console.log("Hash:", hash);
Address Helpers
console.log(ethers.isAddress("0xabc123..."));
console.log(ethers.getAddress("0xabc123..."));
⚡ Recap: Utilities handle ETH units, big numbers, hashes, addresses.
- Advanced Topics
Gas Estimation & Overrides
const gas = await contractWithSigner.transfer.estimateGas("0xdef...", ethers.parseUnits("1", 18));
const tx = await contractWithSigner.transfer("0xdef...", ethers.parseUnits("1", 18), {
gasLimit: gas * 2n
});
Event Filters
const filter = contract.filters.Transfer(null, "0xdef...");
const events = await contract.queryFilter(filter, -1000, "latest");
console.log(events);
Security Tips
Never hardcode private keys.
Use BrowserProvider for frontend dApps.
Validate user inputs.
- Quick Recap (Cheat Sheet)
Provider = connect (JsonRpcProvider, InfuraProvider, BrowserProvider).
Wallet = manage keys + sign transactions.
Contract = interact with ABI.
Utilities = conversions, hashes, addresses.
Advanced = gas, filters, security.
Database Design: Modeling Databases the Right Way
Hey, everyone! I hope you're all doing well.
Since becoming a full-stack developer, I've been working to sharpen my skills, especially on the back end. Back in college I studied data modeling and databases, but since I started my career on the front end, I let that knowledge slide a bit.
I noticed that when starting personal projects or study cases, I would get stuck when it came to modeling the database. I often modeled it wrong, which hurt my code flow because I had to stop at some point to refactor. To fix that, I decided to study the fundamentals of data modeling. Below are the main insights from my studies:
At this first stage of modeling, it makes no sense to start thinking about a specific database technology, such as MySQL, PostgreSQL, Oracle, or MongoDB. If you start with that mindset, you become biased toward modeling for the chosen database, which is a problem because it ties the model to that database. What if the database needs to be changed later for some reason? The entire model would be compromised and would have to be redone.
It's worth noting that before starting the modeling process, you need to gather the project's requirements. These requirements will guide the definition of entities and their relationships.
Conceptual model
This is the stage where we define the entities and how they relate (1:1, 1:N, N:N). At this stage, the main elements are entities, attributes, and the relationships between them.
The goal is to capture business requirements and concepts in an understandable, not overly technical representation that all stakeholders can easily follow.
Tip: if the project isn't for a large company, or few non-technical stakeholders are involved, you can and should skip this stage and start directly with the logical model to save modeling time.
Logical model
This is the stage where we define the attributes (data) of our entities, including primary and foreign keys. Here, the logical model translates the conceptual model into a more detailed and specific structure.
The goal is to create a technical model to guide database development, while remaining abstract enough to stay technology-independent.
Below are some tools that can help with modeling:
Physical model
This is the final stage, where we apply database-specific characteristics to the previously defined logical model; in other words, it's where a database technology is actually implemented. Here we define the data types and how the data will be organized.
The goal is to optimize the model for the chosen technology, ensuring efficient database performance. This is the stage where we analyze which type of database best fits our project, whether SQL or NoSQL.
In short:
- Conceptual modeling is the stage where we define the entities and how they relate (1:1, 1:N, N:N);
- Logical modeling is the stage where we define the attributes (data) of our entities, including primary and foreign keys;
- Physical modeling is the final stage, where we apply database-specific characteristics to the previously defined logical model, i.e., where a database technology is implemented;
Conceptual modeling establishes the business requirements, logical modeling translates those requirements into an understandable data structure, and physical modeling completes the detailed implementation in the specific database environment.
Among the most common challenges are frequent changes to project requirements and database scalability.
Changing requirements
Requirement changes are inevitable in any dynamic business environment. As user needs evolve or new requirements emerge, data models must be adapted to reflect those changes. This creates additional challenges, such as keeping different versions of the models consistent and ensuring the changes don't compromise the integrity of existing data.
Here, DDD (Domain-Driven Design) techniques can help us better understand these changes. DDD helps us understand the business rules. Requirements, rules, and the business domain are closely related areas that walk together throughout the project's life. For good requirements gathering, and to adapt requirements over the course of the project, learn more about the business domain, who the domain experts are, and how to work with the ubiquitous language.
Database scalability
Scalability is another important consideration in data modeling, especially in systems that handle large volumes of data or need to support a growing number of users. Data models that don't scale can lead to performance problems, slow response times, and maintenance difficulties.
To address this, starting at the physical modeling stage, we can and should think about access patterns in order to define indexes in our database. After analyzing how often queries run, we can define access patterns such as:
- List all users;
- Find a user by email;
- List a user's orders;
From these patterns, we can create indexes on information that is queried repeatedly in our database.
Indexes are auxiliary structures that speed up data retrieval from tables. Think of a book: indexes are like entries in the table of contents that point to the location of specific information, instead of making the database scan every row of the table looking for it. This optimizes query performance, reducing response time and lowering the computational load on the server.
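As a hypothetical sketch, the access patterns listed above could map to indexes like this (table and column names are illustrative, assuming a MySQL-style SQL database):

```sql
-- "Find a user by email": a unique index turns the lookup into a single seek
CREATE UNIQUE INDEX idx_users_email ON users (email);

-- "List a user's orders": index the foreign key used to filter orders
CREATE INDEX idx_orders_user_id ON orders (user_id);

-- "List all users" is a full scan by nature; no index helps here,
-- but pagination keeps each request cheap.
```

Note that each index also adds write overhead, which is why indexing should follow from measured access patterns rather than being applied to every column.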
- Requirements gathering should happen before database modeling. To better understand the project's requirements and business rules, DDD (Domain-Driven Design) techniques are welcome;
- Modeling should be done independently of the database. There are three types of modeling: conceptual, logical, and physical;
- Only in physical modeling should we worry about the database, whether SQL or NoSQL, whether MySQL, PostgreSQL, Oracle, MongoDB, or any other;
- Analyzing access patterns and implementing indexes during the physical modeling stage can help our database scale as the project grows;
To be clear, this is just a snapshot of my studies!
If any casino expert notices that I rolled the dice the wrong way (in Portuguese, "dados" means both "dice" and "data"), please don't bet against me; just move on to the next roulette table.
All the reflections here came while I was working through these materials:
- Course: Database Design: modelando bancos de dados do jeito certo – JSTack
- O que é e para que serve a modelagem de dados? – Blog Alura
See you on GitHub. See you soon! 🚀
HI
Hi, my name is Ziad. I'm an FCI student, and it's nice to meet you ❤
Learning about the history of the internet today. Really interesting stuff. I never knew that TCP/IP dates all the way back to the 1970s.
We Launched! A Look Inside Our New AI FinTech Platform.
Blazer AI: Intelligent Student Handbook and Website Assistant
This is a submission for the Heroku "Back to School" AI Challenge
Blazer AI: Intelligent Student Handbook Assistant
Blazer AI is an AI-powered chatbot designed specifically for AB Tech students and faculty to quickly find information buried within the school's extensive website and student handbook. The application solves the common problem of students and faculty struggling to navigate institutional resources and find answers to academic questions.
Built in just 2 weeks while managing 5 classes, Blazer AI combines vector similarity search with intelligent query routing to provide contextually relevant answers. The system can handle both specific course inquiries (like "Tell me about CSC-151") and general questions (like "What are the graduation requirements?") by automatically detecting query types and routing them through appropriate search mechanisms.
The application transforms traditionally static institutional knowledge into an interactive, conversational experience that's available 24/7 for student and faculty support.
Student Success - Blazer AI directly addresses student success by making institutional knowledge more accessible, reducing time spent searching for academic information, and providing instant answers to common student questions about courses, policies, and campus resources.
Educator Empowerment - What applies to the student success category applies equally to educator empowerment. All of my instructors have a hard time pointing students to specific handbook sections for policies, or to specific webpages that answer questions, because our school website spans more than 3,000 pages. If anything, educator empowerment may be the stronger fit: a typical student asks only about their own program of study or particular classes, while educators and faculty now have instant, contextual access to all of the website and handbook data.
Live Application: https://blazer-ai-abt.com/
Source Code:
- Frontend: https://github.com/echtoplasm/handBooky-frontend
- Backend: https://github.com/echtoplasm/handBooky-backend
Screenshots
Homepage before chatting:
Normal questions being asked:
Demo GIF:
GIF showing the React state management and context being passed between two components to ask questions of the chatbot
Key Features Demonstrated
- Intelligent query routing between vector and non-vector search
- Real-time AI responses with source citations
- Mobile-responsive design optimized for student use
- Sample questions tailored to common student inquiries
Technical Architecture
- Frontend: React application deployed via Heroku Static Buildpack
- Backend: Node.js/Express API with PostgreSQL vector database
- AI Integration: Claude 3.5 Haiku via Heroku Managed Inference
Heroku Managed Inference: Integrated Claude 3.5 Haiku for natural language processing and response generation. The AI agent is configured with a comprehensive system prompt that restricts responses to school-related topics and ensures citations are included.
Heroku Embedding Model API: Used for generating vector embeddings from user queries, enabling semantic similarity search through the knowledge base.
pgvector Integration: Leveraged PostgreSQL with pgvector extension for storing and querying text embeddings, allowing the system to find contextually relevant information even when exact keyword matches aren't available.
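As a sketch of the kind of pgvector similarity query this implies (the table and column names are assumptions, not the app's actual schema), a top-k semantic search over stored chunks might look like:

```sql
-- Find the 5 chunks most similar to the query embedding (bound as $1),
-- using pgvector's cosine-distance operator (<=>); smaller distance = closer
SELECT content,
       source_url,
       embedding <=> $1 AS distance
FROM handbook_chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

Because `<=>` is an operator pgvector can back with an index (e.g. IVFFlat or HNSW), this stays fast even when exact keyword matches are absent.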
Agent Coordination
The system employs intelligent routing logic:
- Course Code Detection: Regex pattern matching identifies specific course queries (e.g., "CSC-151") and routes them to direct database lookups
- Vector Search: General queries generate embeddings and perform similarity searches through the vector database
- Context Assembly: Retrieved information is structured with source metadata and fed to the AI agent for natural language response generation
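The routing step described above can be sketched in a few lines. This is an illustrative example, not the app's actual code; the regex and return shape are assumptions:

```javascript
// Hypothetical sketch of the query router: a regex detects course-code
// queries (e.g. "CSC-151") and routes them to a direct database lookup,
// while everything else falls through to vector search.
const COURSE_CODE = /\b[A-Z]{2,4}-\d{3}\b/;

function routeQuery(message) {
  const match = message.toUpperCase().match(COURSE_CODE);
  if (match) {
    return { strategy: "course-lookup", courseCode: match[0] };
  }
  return { strategy: "vector-search" };
}

console.log(routeQuery("Tell me about CSC-151"));
// → { strategy: 'course-lookup', courseCode: 'CSC-151' }
console.log(routeQuery("What are the graduation requirements?"));
// → { strategy: 'vector-search' }
```

Checking the cheap, deterministic pattern first means the app only pays for an embedding call when the query actually needs semantic search.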
Multi-Agent Architecture
- Query Router: Analyzes incoming messages to determine search strategy
- Vector Search Agent: Handles semantic similarity searches using embeddings
- Course Lookup Agent: Manages direct database queries for specific course codes
- Response Generator: Claude 3.5 Haiku processes context and generates student-friendly responses
Key Technologies
- PERN Stack: PostgreSQL, Express.js, React, Node.js
- Vector Database: PostgreSQL with pgvector for similarity search
- AI Integration: Heroku Managed Inference with Claude 3.5 Haiku
Technical Challenges Solved
Intelligent Query Routing: Developed a hybrid approach that combines exact course code matching with semantic vector search, optimizing both accuracy and response time.
Data Processing Pipeline: Built utilities to process PDF documents and website content into structured, searchable chunks with embeddings, handling inconsistent institutional data formats.
Context Optimization: Implemented context assembly that includes source metadata, prerequisites, and corequisites to provide comprehensive responses while staying within token limits.
Responsive Design: Created a mobile-first interface optimized for student use patterns, recognizing that most students access information via smartphones.
Development Timeline: Solo-developed and completed in 2 weeks while maintaining a full course load, demonstrating efficient project management and focused technical implementation.
Data Sources: Integrated AB Tech's student handbook (PDF) and website sitemap data, creating a comprehensive knowledge base covering academic policies, course information, and campus resources.
The application successfully transforms static institutional documentation into an interactive, intelligent assistant that understands student intent and provides relevant, cited information in a conversational format.
Future Implementations
A future version of this application would allow users to drag and drop their own sitemap data or handbook PDF, and the app would parse and embed it for them. This would let any institution stand up a RAG agent ready to answer questions about its own resources. That was the original vision for this application, but as a solo developer under time constraints, I had to scale back the ambition a bit.
Scaling to 300K+ Records Daily: How We Handle High Volume Data Processing with Lumen & MySQL
Building a lean, mean data processing machine that handles 100 I/O operations per second without breaking a sweat
When your application suddenly needs to process hundreds of thousands of records daily, with peak loads hitting 100 I/O operations per second, you quickly learn that standard CRUD operations won't cut it. Here's how we transformed our Lumen application into a high-performance data processing powerhouse.
Our monitoring system processes 300,000+ data records daily, generating complex reports and exports while maintaining sub-second response times. The system handles everything from real-time aggregations to massive CSV exports, all while keeping memory usage under control.
JSON Columns with Generated Virtual Columns
Instead of creating multiple tables with complex joins, we leveraged MySQL's JSON capabilities with a twist:
// Migration: Create virtual columns for frequently queried JSON fields
Schema::table('data_xxx', function (Blueprint $table) {
$table->string('feedback_extracted')->virtualAs(
"JSON_UNQUOTE(JSON_EXTRACT(content_data, '$.feedback'))"
)->index();
$table->decimal('amount_extracted', 15, 2)->virtualAs(
"CAST(JSON_EXTRACT(content_data, '$.amount') AS DECIMAL(15,2))"
)->index();
});
Why this works:
- Virtual columns are computed on the fly but can be indexed
- Eliminates need for complex joins
- Maintains data flexibility while enabling fast queries
Strategic Indexing
// Composite indexes for common query patterns
Schema::table('data_xxx', function (Blueprint $table) {
$table->index(['branch', 'visit_date', 'visit_type']);
$table->index(['personnel_id', 'visit_date', 'status']);
$table->index(['visit_type', 'status', 'feedback_extracted']);
});
Avoiding N+1 with Smart Aggregation
Instead of loading relations, we aggregate at the database level:
public function getDataSummary($filters)
{
return DB::table('data_xxx')
->select([
'branch',
DB::raw('SUM(CASE WHEN status = "COMPLETED" THEN 1 ELSE 0 END) as completed'),
DB::raw('SUM(CASE WHEN status = "PLANNED" THEN 1 ELSE 0 END) as planned'),
DB::raw('AVG(CAST(JSON_EXTRACT(content_data, "$.score") AS DECIMAL)) as avg_score')
])
->where('visit_date', $filters['date'])
->groupBy('branch')
->get();
}
Generator Powered Data Processing
For large datasets, we use PHP generators to maintain constant memory usage:
public function processLargeDataset($filters): \Generator
{
$query = DB::table('data_xxx')
->where('visit_date', '>=', $filters['start_date'])
->where('visit_date', '<=', $filters['end_date'])
->orderBy('id');
foreach ($query->lazy(2000) as $record) {
yield $this->transformRecord($record);
}
}
// Usage
foreach ($this->processLargeDataset($filters) as $processedRecord) {
// Memory stays constant regardless of dataset size
$this->handleRecord($processedRecord);
}
Memory-Efficient CSV Generation
Our export system handles massive datasets while keeping memory usage under 50MB:
public function exportToCSV($filters): string
{
// Create temporary file
$tempFile = tmpfile();
$tempPath = stream_get_meta_data($tempFile)['uri'];
// Write headers
fputcsv($tempFile, ['Date', 'Branch', 'Personnel', 'Customer', 'Result']);
// Stream data in chunks
foreach ($this->getExportData($filters) as $record) {
fputcsv($tempFile, [
$record['visit_date'],
$record['branch_name'],
$record['personnel_name'],
$record['customer_name'],
$record['visit_result']
]);
}
// Upload to storage
$finalPath = "exports/data_" . date('Y-m-d_H-i-s') . ".csv";
Storage::put($finalPath, fopen($tempPath, 'r'));
fclose($tempFile);
return $finalPath;
}
private function getExportData($filters): \Generator
{
$query = DB::table('data_xxx')
->select([
'visit_date', 'branch_name', 'personnel_name',
'customer_name', 'visit_result'
])
->where('visit_date', $filters['date'])
->orderBy('id');
foreach ($query->lazy(2000) as $record) {
yield (array) $record;
}
}
Background Processing with Chunked Operations
For time-intensive operations, we use job queues with intelligent chunking:
public function processInBackground($requestData)
{
// Create tracking record
$exportLog = $this->createExportLog($requestData);
// Queue the processing job
Queue::push(new ProcessDataExport($exportLog->id, $requestData));
return $exportLog;
}
// In the job class
public function handle()
{
$startTime = microtime(true);
foreach ($this->getDataInChunks() as $chunk) {
$this->processChunk($chunk);
// Prevent memory leaks and timeouts
if (microtime(true) - $startTime > 300) { // 5 minutes
Queue::push(new ProcessDataExport($this->logId, $this->remainingData));
return;
}
}
$this->markAsCompleted();
}
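The time-budget-and-requeue pattern in handle() is language-agnostic. A minimal Python sketch of the same idea (the function name and list-based chunk handling are mine, not from the codebase): process chunks until the budget runs out, then hand back the remainder for a follow-up job:

```python
import time

def process_with_budget(chunks, handle, budget_seconds=300.0):
    """Run handle() on each chunk until the time budget is spent.

    Returns the unprocessed chunks so the caller can schedule a
    follow-up job with them, mirroring the re-dispatch above.
    """
    start = time.monotonic()
    remaining = list(chunks)
    while remaining:
        if time.monotonic() - start > budget_seconds:
            return remaining  # re-queue these in a new job
        handle(remaining.pop(0))
    return []  # everything processed; safe to mark as completed

processed = []
leftover = process_with_budget([1, 2, 3], processed.append)
print(processed, leftover)  # [1, 2, 3] []
```

Checking the clock between chunks, rather than inside them, keeps the guard cheap while still bounding each job's runtime.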
Smart Cache Invalidation
public function getCachedSummary($filters)
{
$cacheKey = 'summary_' . md5(serialize($filters));
// For today's data, cache for 30 minutes
// For historical data, cache for 24 hours
$ttl = $filters['date'] === date('Y-m-d') ? 1800 : 86400;
return Cache::remember($cacheKey, $ttl, function () use ($filters) {
return $this->generateSummary($filters);
});
}
Before optimization:
- Memory usage: 500MB+ for large exports
- Export time: 5+ minutes for 100K records
- Database CPU: 80%+ during peak hours
After optimization:
- Memory usage: <50MB consistently
- Export time: 30 seconds for 100K records
- Database CPU: <30% during peak hours
- Response time: <200ms for most queries
Key Takeaways
- JSON columns + virtual indexes eliminate complex joins while maintaining query performance
- PHP generators keep memory usage constant regardless of dataset size
- Strategic chunking prevents timeouts and resource exhaustion
- Proper indexing strategy is crucial for high-volume operations
- Stream processing beats loading everything into memory
The beauty of this approach is its simplicity: no complex technology, no exotic databases, just well-optimized PHP and MySQL doing what they do best.
Usability Testing
Usability testing is a method for evaluating a product's interface (website, mobile app, software, etc.) by observing real users as they perform specific tasks in a controlled environment.
The goal is to identify usability problems, that is, the places where users get lost, run into errors, or become frustrated, in order to improve the effectiveness, efficiency, and satisfaction of the user experience (UX).
A Real-World Example of Usability Testing
A classic example takes place on an e-commerce site.
Test scenario: A participant, selected to represent the typical customer, is given a realistic task.
Task: "You need to buy a birthday gift for a friend. Go to the site and find a black smartwatch under €150, add it to your cart, then proceed to the order confirmation page (without completing the purchase)."
Observations and Results
While the user carries out this task, researchers observe and record their behavior, in particular:
Task success rate: Did the user manage to find and add the product?
Task completion time: How long did it take?
Errors and hesitation: Did they click in the wrong place? Did it take them a while to find the price filter or the add-to-cart button?
Real problem identified: The user managed to find the product but spent 45 seconds looking for the price-filtering feature, because the button was mislabeled ("Sort by" instead of "Filters") and visually inconspicuous.
Action and improvement: Based on this observation and others, the team changed the label to "Filters (Price, Color, Brand)" and increased the button's contrast.
This simple change, grounded in a usability test, can reduce search time and increase the likelihood that customers find what they are looking for, thereby improving the site's conversion rate.
AI: Understanding the Intelligence Revolution Shaping Our Future
Artificial Intelligence (AI) is no longer a concept confined to science fiction novels. It's a tangible, rapidly evolving force that is fundamentally reshaping our world, from how we work and communicate to how we solve complex problems. From powering personalized recommendations on streaming services to driving autonomous vehicles, AI is deeply embedded in our daily lives, often without us even realizing it. But what exactly is AI, and how is this intelligence revolution impacting our present and future?
In this post, we'll demystify AI, explore its diverse applications, examine its profound impact on various sectors, and consider the exciting opportunities and critical challenges that lie ahead. Join us as we journey into the heart of the intelligence revolution.
At its core, Artificial Intelligence refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
There are several main branches of AI:
- Machine Learning (ML): A subset of AI that enables systems to learn from data, identify patterns, and make decisions with minimal human intervention. Think of spam filters or predictive text.
- Deep Learning (DL): A more advanced form of ML that uses artificial neural networks inspired by the human brain. This allows for complex pattern recognition, crucial for tasks like image and speech recognition.
- Natural Language Processing (NLP): Allows computers to understand, interpret, and generate human language. Virtual assistants like Siri and Alexa are prime examples.
- Computer Vision: Enables computers to "see" and interpret visual information from the world, used in facial recognition, medical imaging, and self-driving cars.
- Robotics: Integrates AI with physical machines to perform tasks autonomously or semi-autonomously in diverse environments.
These interconnected fields are pushing the boundaries of what machines can achieve, leading to unprecedented levels of automation and insight.
The influence of AI is pervasive, touching nearly every sector and aspect of our daily existence. Its ability to process vast amounts of data, identify trends, and automate repetitive tasks offers unparalleled efficiency and innovation.
- Healthcare: AI is revolutionizing diagnostics, drug discovery, personalized treatment plans, and even robotic surgery. It helps doctors analyze medical images more accurately and predict disease outbreaks.
- Finance: Fraud detection, algorithmic trading, credit scoring, and personalized financial advice are all powered by AI, leading to more secure and efficient financial systems.
- Retail & E-commerce: AI drives personalized product recommendations, optimizes supply chains, enhances customer service through chatbots, and analyzes consumer behavior for better marketing strategies.
- Manufacturing: From predictive maintenance that prevents equipment failure to intelligent robots on assembly lines, AI is optimizing production processes, reducing waste, and improving safety.
- Transportation: Self-driving cars, intelligent traffic management systems, and optimized logistics routes are transforming how we move goods and people, aiming for greater safety and efficiency.
- Education: AI offers personalized learning experiences, intelligent tutoring systems, and automated grading, adapting to individual student needs and making education more accessible.
Beyond these industries, AI enhances our personal lives through smart home devices, virtual assistants, content recommendation engines, and advanced security systems, making our environments more intelligent and responsive.
While the benefits of AI are undeniable, its rapid advancement also brings significant challenges and ethical considerations that demand careful attention.
- Ethical Concerns: Issues like data privacy, algorithmic bias (where AI systems perpetuate or amplify societal biases present in their training data), and accountability for AI decisions are critical. Developing AI responsibly requires transparency, fairness, and human oversight.
- Job Displacement: Automation powered by AI may lead to job displacement in certain sectors, necessitating workforce retraining and adaptation strategies to ensure a just transition.
- Security Risks: As AI systems become more powerful, they also become potential targets for malicious actors, raising concerns about cybersecurity and the misuse of AI technologies.
- Complexity and Control: Understanding and controlling highly complex AI systems, especially those that learn and evolve autonomously, presents a unique challenge for researchers and policymakers.
Despite these challenges, the opportunities presented by AI are immense. Continued research and development promise breakthroughs in fields like climate science, disease eradication, and personalized human augmentation. The key lies in developing "Responsible AI" – systems designed with human values, safety, and ethical principles at their core. Engaging in open dialogue, fostering interdisciplinary collaboration, and establishing robust regulatory frameworks will be crucial to harnessing AI's potential for collective good.
Artificial Intelligence is not just a technological trend; it's a fundamental shift in how we interact with information, automate tasks, and solve problems previously considered insurmountable. From its foundational concepts in machine learning to its transformative impact across global industries, AI is unequivocally shaping our future.
Understanding AI is no longer a niche interest; it's a necessity for anyone looking to navigate the modern world. As AI continues to evolve, our ability to harness its power responsibly, address its challenges proactively, and innovate ethically will define the success of this intelligence revolution.
What are your thoughts on AI's impact, or how are you integrating AI into your world? Share your perspectives and join the conversation about building a smarter, more efficient, and ethically sound future.
#AI #ArtificialIntelligence #MachineLearning #DeepLearning #TechFuture #Innovation #DigitalTransformation #AIethics #FutureOfWork #AIstrategy
Per-Object Permissions for Elasticsearch Lists in Django Websites
Elasticsearch improves the performance of filterable and searchable list views, reducing load times from several seconds to about half a second. It stores list view details from various relations denormalized in a JSON-like structure.
In the Django-based system I’ve been working on, we use django-guardian to set per-object permissions for users or roles managing various items. This means you not only need to check whether a user has general permission to view, change, or delete a type of object, but also whether they have permission to access specific objects.
A major challenge arises when using Elasticsearch for authorized views based on user permissions – how can we check permissions without slowing down the listing too much?
Here are a few options I compared:
- Check all object UUIDs the user can access via django-guardian, then pass those UUIDs to the Elasticsearch search query. This might work with fewer than 100 items, but it doesn’t scale.
- Filter the Elasticsearch list first, and then check each item’s UUID against user permissions. With thousands of search results, permission checks become too slow. If I check permissions only for the first page, pagination data becomes inaccurate.
- Create a user-permission Elasticsearch index with all item UUIDs accessible to the user, and filter the list by looking up those UUIDs. This makes updating the index tricky, especially for admins and superusers.
- For each item, store the list of user IDs and group IDs that can view it, then check the current user against those IDs in the list view. This is the approach I chose, since typically only a handful of users and groups need access to any given item.
Below is the code snippet that implements the last approach.
We use django-elasticsearch-dsl for indexing Django models in Elasticsearch. The Elasticsearch index document for an item with user IDs and group IDs can look like this:
# items/documents.py
from django.conf import settings
from django_elasticsearch_dsl.registries import registry
from django_elasticsearch_dsl import Document, fields
from guardian.shortcuts import get_users_with_perms, get_groups_with_perms
from .models import Item
@registry.register_document
class ItemDocument(Document):
users_can_view = fields.KeywordField(multi=True)
users_can_change = fields.KeywordField(multi=True)
users_can_delete = fields.KeywordField(multi=True)
groups_can_view = fields.KeywordField(multi=True)
groups_can_change = fields.KeywordField(multi=True)
groups_can_delete = fields.KeywordField(multi=True)
class Index:
name = "items"
settings = {
"number_of_shards": 1,
"number_of_replicas": 0,
}
class Django:
model = Item
fields = [
"uuid",
"title",
"intro",
"created_at",
"updated_at",
]
queryset_pagination = 5000
def prepare(self, instance):
data = super().prepare(instance)
data["users_can_view"] = []
data["users_can_change"] = []
data["users_can_delete"] = []
for user, permissions in get_users_with_perms(
instance,
attach_perms=True,
with_superusers=True,
with_group_users=False,
only_with_perms_in=["view_item", "change_item", "delete_item"],
).items():
if "view_item" in permissions:
data["users_can_view"].append(user.pk)
if "change_item" in permissions:
data["users_can_change"].append(user.pk)
if "delete_item" in permissions:
data["users_can_delete"].append(user.pk)
data["groups_can_view"] = []
data["groups_can_change"] = []
data["groups_can_delete"] = []
for group, permissions in get_groups_with_perms(
instance, attach_perms=True
).items():
for perm in permissions:
if perm == "view_item":
data["groups_can_view"].append(group.pk)
elif perm == "change_item":
data["groups_can_change"].append(group.pk)
elif perm == "delete_item":
data["groups_can_delete"].append(group.pk)
return data
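To make the denormalization concrete, here is roughly what an indexed document's permission fields end up lookingking like, together with the membership check the list view performs (all pks are invented for illustration):

```python
# A hypothetical document as it would land in the "items" index
doc = {
    "title": "Sample item",
    "users_can_view": [7, 12],   # direct per-user view permissions
    "users_can_change": [7],
    "users_can_delete": [],
    "groups_can_view": [3],      # group-level view permissions
    "groups_can_change": [],
    "groups_can_delete": [],
}

def can_view(user_pk, group_pks, doc):
    """A document matches if the user's pk, or any of their group
    pks, appears in the corresponding *_can_view list. This is the
    logic the Elasticsearch bool/should filter expresses."""
    return user_pk in doc["users_can_view"] or any(
        g in doc["groups_can_view"] for g in group_pks
    )

print(can_view(12, [], doc))   # True: direct user permission
print(can_view(1, [9], doc))   # False: no matching user or group
```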
Next, we need a utility class for paginating Elasticsearch indexes in a way that’s compatible with Django’s default queryset pagination:
# items/utils.py
class ElasticsearchPage:
"""
Django Paginator-compatible interface for Elasticsearch search results.
"""
def __init__(self, results, total_count, page_number, items_per_page):
self.object_list = results
self.total_count = total_count
self.number = page_number
self.paginator = type(
"Paginator",
(),
{
"count": total_count,
"num_pages": (total_count + items_per_page - 1) // items_per_page,
"per_page": items_per_page,
},
)()
def has_previous(self):
return self.number > 1
def has_next(self):
return self.number < self.paginator.num_pages
def has_other_pages(self):
return self.paginator.num_pages > 1
def previous_page_number(self):
return self.number - 1 if self.has_previous() else None
def next_page_number(self):
return self.number + 1 if self.has_next() else None
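The num_pages expression above is integer ceiling division; a quick standalone check of the edge cases:

```python
def num_pages(total_count, per_page):
    # Integer ceiling division, same expression as in ElasticsearchPage:
    # adding (per_page - 1) before floor division rounds partial pages up.
    return (total_count + per_page - 1) // per_page

print(num_pages(0, 24), num_pages(24, 24), num_pages(25, 24))  # 0 1 2
```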
Finally, the list view checks user IDs and group IDs in the index against the current user’s ID and group memberships:
# items/views.py
from django.contrib.auth.decorators import login_required
from django.shortcuts import render
from elasticsearch_dsl import Q
from .documents import ItemDocument
from .utils import ElasticsearchPage
@login_required
def item_list(request):
user_group_pks = list(request.user.groups.values_list("pk", flat=True))
search_obj = ItemDocument.search()
perm_filter = Q(
"bool",
should=[
Q("term", users_can_view=request.user.pk),
Q("terms", groups_can_view=user_group_pks),
],
minimum_should_match=1,
)
search_obj = search_obj.query("bool", must=[perm_filter])
# more search and filtering go here...
items_per_page = int(request.GET.get("items_per_page", 24))
page_number = int(request.GET.get("page", 1))
offset = (page_number - 1) * items_per_page
total_count = search_obj.count()
search_obj = search_obj[offset : offset + items_per_page]
search_results = search_obj.execute()
page = ElasticsearchPage(
results=search_results,
total_count=total_count,
page_number=page_number,
items_per_page=items_per_page,
)
context = {
"page": page,
"items_per_page": items_per_page,
}
return render(request, "items/item_list.html", context)
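For readers who think in the raw query DSL, the permission filter built with Q objects above serializes to roughly this body (user pk and group pks invented for illustration):

```python
# Hypothetical values: user pk 7, group pks [3, 5]
permission_query = {
    "bool": {
        "must": [
            {
                "bool": {
                    "should": [
                        {"term": {"users_can_view": 7}},
                        {"terms": {"groups_can_view": [3, 5]}},
                    ],
                    # At least one should-clause must match: the user's
                    # own pk OR one of their group pks must be listed.
                    "minimum_should_match": 1,
                }
            }
        ]
    }
}
inner = permission_query["bool"]["must"][0]["bool"]
print(len(inner["should"]), inner["minimum_should_match"])  # 2 1
```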
At this point, it’s important to update the index not only when item details change, but also when permissions change.
This can be done by calling the following in the relevant views or form save methods:
from django_elasticsearch_dsl.registries import registry
registry.update(item)
Using django-guardian to pre-process or post-process Elasticsearch-filtered lists is inefficient. Instead, permissions should exist directly in the Elasticsearch index. Storing user IDs and group IDs in the items themselves is more practical. Just make sure Elasticsearch is properly secured with SSL/TLS and authentication (username and password) to protect the data from tampering.
How Can College Students Make Money Online or Find Internships? 💸🎓
Hey dev.to community!
Hope you all are doing well in your lives!!
I’m a college student looking to explore ways to earn some real money while studying. Balancing college and finances can be tough, so I’m curious about how other students or bloggers have managed it.
I’m particularly interested in:
Online side hustles – anything that actually pays real money (freelancing, content creation, microjobs, etc.)
Internships or work opportunities – both paid and valuable experience-wise
So here’s where I need your help:
Have you or someone you know successfully earned money while in college?
What online side hustles actually worked for you?
Do you know of internships or platforms where students like me can join and earn or gain experience?
I’d love to hear your stories, tips, and suggestions. Even small pieces of advice can go a long way!
Also, if there's anything out there that's currently open for college students like me, please drop a link or suggestion. I'm eager to start right away!
Thanks in advance, dev.to! Let’s help students turn their free time into something productive 💡
Pixel Buds Pro 2 get Adaptive Audio, gesture controls and more in latest update
Google first teased some enticing upgrades for its Pixel Buds Pro 2 during the Made by Google event in August. More than a month later, Google is finally rolling out the update that makes its wireless earbuds earn the Pro label.
The Pixel Buds Pro 2 now get an Adaptive Audio feature in the Active Noise Control section of the Pixel Buds app. This ANC mode automatically adjusts the volume depending on your surrounding environment, balancing between hearing your music or podcasts and the world around you. If you want to drown out the outside world a little more, the Pixel Buds Pro 2 now also have the Loud Noise Protection feature, which can detect and reduce any sudden loud noises, like a passing ambulance siren or construction work. While these two features are already found in Apple's AirPods Pro 3, they're a welcome addition to the more affordable Pixel Buds Pro 2.
For anyone who frequently uses Gemini Live, you'll notice that the AI assistant will be able to hear you better in noisy environments thanks to advanced audio processing that prioritizes your voice and eliminates background noise. For a truly hands-free experience, the update even adds gesture controls that let Pixel Buds Pro 2 users nod their head to answer a call or start dictation for a text reply and shake their head to decline a call or dismiss a text. Google is rolling out its 4.467 update to its users gradually, which takes about 10 minutes to download and another 10 minutes to install.
This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/pixel-buds-pro-2-get-adaptive-audio-gesture-controls-and-more-in-latest-update-155116813.html?src=rss
The best October Prime Day deals you can get today: Early sales on gear from Apple, Anker, JBL, Shark and more
October Prime Day will be here soon on October 7 and 8, but as to be expected, you can already find some decent sales available now. Amazon always has lead-up sales in the days and weeks before Prime Day, and it’s wise to shop early if you’re on the hunt for something specific and you see that item at a good discount.
Prime Day deals are typically reserved for subscribers, but there are always a few that anyone can shop. We expect this year to be no exception, and we’re already starting to see that trend in these early Prime Day deals. These are the best Prime Day deals you can get right now ahead of the event, and we’ll update this post with the latest offers as we get closer to October Prime Day proper.
Anker Nano 5K ultra-slim power bank (Qi2, 15W) for $46 (16 percent off): A top pick in our guide to the best MagSafe power banks, this super-slim battery is great for anyone who wants the convenience of extra power without the bulk. We found its proportions work very well with iPhones, and its smooth, matte texture and solid build quality make it feel premium.
Leebein 2025 electric spin scrubber for $40 (43 percent off, Prime exclusive): This is an updated version of my beloved Leebein electric scrubber, which has made cleaning my shower easier than ever before. It comes with seven brush heads so you can use it to clean all kinds of surfaces, and its adjustable arm length makes it easier to clean hard-to-reach spots. It's IPX7 waterproof and recharges via USB-C.
Apple Mac mini (M4) for $499 ($100 off): If you prefer desktops over laptops, the upgraded M4 Mac mini is one that won’t take up too much space, but will provide a ton of power at the same time. Not only does it come with an M4 chipset, but it also includes 16GB of RAM in the base model, plus front-facing USB-C and headphone ports for easier access.
Jisulife Life7 handheld fan for $25 (14 percent off, Prime exclusive): This handy little fan is a must-have if you live in a warm climate or have a tropical vacation planned anytime soon. It can be used as a table or handheld fan and can even be worn around the neck so you don't have to hold it at all. Its 5,000 mAh battery allows it to last hours on a single charge, and the small display in the middle of the fan's blades shows its remaining battery level.
Blink Mini 2 security cameras (two-pack) for $35 (50 percent off): Blink makes some of our favorite security cameras, and the Mini 2 is a great option for indoor monitoring. It can be placed outside with the right weatherproof adapter, but since it needs to be plugged in, we like it for keeping an eye on your pets while you're away and watching over entry ways from the inside.
Apple Watch Series 11 for $389 ($10 off): The latest flagship Apple Watch is our new pick for the best smartwatch you can get, and it's the best all-around Apple Watch, period. It's not too different from the previous model, but Apple promises noticeable gains in battery life, which will be handy for anyone who wants to wear their watch all day and all night to track sleep.
Apple MacBook Air (13-inch, M4) for $799 (20 percent off): Our top pick for the best laptop for most people, the latest MacBook Air is impressively thin and light without skimping on performance. The M4 chipset is powerful enough to handle everyday tasks without breaking a sweat, plus some gaming and labor-intensive work. It has a comfortable keyboard, luxe-feeling trackpad and an excellent battery life.
Apple iPad (A16) for $299 ($50 off): The new base-model iPad now comes with twice the storage of the previous model and the A16 chip. That makes the most affordable iPad faster and more capable, though it still isn't enough to support Apple Intelligence.
Apple iPad Air (11-inch, M3) for $449 ($150 off): The only major difference between the latest iPad Air and the previous generation is the addition of the faster M3 chip. We awarded the new slab an 89 in our review, appreciating the fact that the M3 chip was about 16 percent faster in benchmark tests than the M2. This is the iPad to get if you want a reasonable amount of productivity out of an iPad that's more affordable than the Pro models.
Samsung EVO Select microSD card (256GB) for $23 (15 percent off): This Samsung card has been one of our recommended models for a long time. It's a no-frills microSD card that, while not the fastest, will be perfectly capable in most devices where you're just looking for simple, expanded storage.
Anker Soundcore Select 4 Go speaker for $26 (26 percent off): This small Bluetooth speaker gets pretty loud for its size and has decent sound quality. You can pair two together for stereo sound as well, and its IP67-rated design will keep it protected against water and dust.
Roku Streaming Stick Plus 2025 for $29 (27 percent off): Roku makes some of the best streaming devices available, and this small dongle gives you access to a ton of free content plus all the other streaming services you could ask for: Netflix, Prime Video, Disney+, HBO Max and many more.
Amazon Fire TV Stick 4K Max for $40 (33 percent off): Amazon's most powerful streaming dongle supports 4K HDR content, Dolby Vision and Atmos and Wi-Fi 6E. It also has double the storage of cheaper Fire TV sticks.
JBL Go 4 portable speaker for $40 (20 percent off): The Go 4 is a handy little Bluetooth speaker that you can take anywhere you go thanks to its small, IP67-rated design and built-in carrying loop. It'll get seven hours of playtime on a single charge, and you can pair two together for stereo sound.
Anker Soundcore Space A40 for $45 (44 percent off): Our top pick for the best budget wireless earbuds, the Space A40 have surprisingly good ANC, good sound quality, a comfortable fit and multi-device connectivity.
Anker MagGo 10K power bank (Qi2, 15W) for $63 (22 percent off, Prime exclusive): A 10K power bank like this is ideal if you want to be able to recharge your phone at least once fully and have extra power to spare. This one is also Qi2 compatible, providing up to 15W of power to supported phones.
Amazon Fire TV Cube for $100 (29 percent off): Amazon's most powerful streaming device, the Fire TV Cube supports 4K, HDR and Dolby Vision content, Dolby Atmos sound, Wi-Fi 6E and it has a built-in Ethernet port. It has the most internal storage of any Fire TV streaming device, plus it comes with an enhanced Alexa Voice Remote.
Rode Wireless Go III for $199 (30 percent off): A top pick in our guide to the best wireless microphones, the Wireless Go III records pro-grade sound and has handy extras like onboard storage, 32-bit float and universal compatibility with iPhones, Android, cameras and PCs.
Shark AI robot vacuum with self-empty base for $230 (58 percent off, Prime exclusive): A version of one of our favorite robot vacuums, this Shark machine has strong suction power and supports home mapping. The Shark mobile app lets you set cleaning schedules, and the self-empty base that it comes with will hold 30 days worth of dust and debris.
Levoit LVAC-300 cordless vacuum for $250 ($100 off, Prime exclusive): One of our favorite cordless vacuums, this Levoit machine has great handling, strong suction power for its price and a premium-feeling design. Its bin isn't too small, it has HEPA filtration and its battery life should be more than enough for you to clean your whole home many times over before it needs a recharge.
Shark Robot Vacuum and Mop Combo for $300 (57 percent off, Prime exclusive): If you're looking for an autonomous dirt-sucker that can also mop, this is a good option. It has a mopping pad and water reservoir built in, and it supports home mapping as well. Its self-emptying base can hold up to 60 days worth of debris, too.
Nintendo Switch 2 for $449: While not technically a discount, it's worth mentioning that the Switch 2 and the Mario Kart Switch 2 bundle are both available at Amazon now, no invitation required. Amazon only listed the new console for the first time in July after being left out of the initial pre-order/availability window in April. Once it became available, Amazon customers looking to buy the Switch 2 had to sign up to receive an invitation to do so. Now, that extra step has been removed and anyone can purchase the Switch 2 on Amazon.
This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-october-prime-day-deals-you-can-get-today-early-sales-on-gear-from-apple-anker-jbl-shark-and-more-050801366.html?src=rss
The Roku Streaming Stick Plus is on sale for only $29 right now
If you're looking for a way to upgrade an old TV or add a more convenient smart interface to your main set, Roku devices are good ways to do that. Thanks to Prime Day deals that you can already get now, you can get one of our favorite Roku streaming devices for less than $30. The Roku Streaming Stick Plus is on sale for just $29 right now, which is 27 percent off and the lowest price we've seen.
We picked the Streaming Stick Plus as the best streaming device for free and live content, thanks in large part to The Roku Channel app that accompanies it. The Roku Channel features over 500 free TV channels with live news, sports coverage and a rotating lineup of TV shows and movies.
In our hands-on review of the Roku Streaming Stick Plus, we thought it was perfect for travel thanks to its small size and the fact that it can be powered by your TV's USB port, nixing the need for a wall adapter. Menu navigation and opening or closing apps won't happen at quite the same speeds as more expensive streamers, but it's quick enough for what is ultimately a pretty low-cost option. The Wi-Fi range on this one is also weaker than Roku's pricier devices, but unless you are placing it exceedingly far from your router, it shouldn't be an issue.
The Roku Streaming Stick Plus supports both HD and 4K TVs, as well as HDR10+ content. It doesn't support Dolby Vision, however; for that you'll need to upgrade to Roku's Streaming Stick 4K or Roku Ultra. It comes with Roku's rechargeable voice remote with push-to-talk voice controls. Roku's remote can also turn on your TV and adjust the volume while you're watching.
If you've been thinking about getting a Roku device, or you already love the platform and want a compact and convenient way to take it with you when you travel, then this sale provides a great opportunity.
This article originally appeared on Engadget at https://www.engadget.com/deals/the-roku-streaming-stick-plus-is-on-sale-for-only-29-right-now-134656999.html?src=rss
US labor board drops allegation that Apple's CEO violated employees' rights
The National Labor Relations Board has withdrawn "many of the claims" it made against Apple in relation to the cases brought in 2021 by former employees Ashley Gjøvik and Cher Scarlett, according to Bloomberg. In particular, it dismissed an allegation that Apple CEO Tim Cook violated workers' rights when he sent an all-staff email that year, which said "people who leak confidential information do not belong" in the company. Cook also said in the email that Apple was "doing everything in [its] power to identify those who leaked" information from an internal meeting the previous week, wherein management answered workers' questions about pay equity and Texas’ anti-abortion law.
Apple didn't “tolerate disclosures of confidential information, whether it’s product IP or the details of a confidential meeting," Cook wrote. Gjøvik and Scarlett accused Apple of prohibiting wage discussion and preventing staff from talking to reporters. After an investigation, NLRB previously came to the conclusion that Cook's email and Apple's overall behavior were "interfering with, restraining and coercing employees in the exercise of their rights."
In addition to dropping its claim that Cook violated workers' rights, the labor board is also withdrawing its allegation that the firing of activist Janneke Parrish, one of the leaders of the #AppleToo movement, broke the law. It's dismissing its previous allegations that Apple broke the law by imposing confidentiality rules and surveilling workers or making them think they were under surveillance, as well.
Bloomberg says this is just one instance of the NLRB being more friendly to companies under President Trump. It's not quite clear if the labor board has withdrawn all allegations against Apple related to the complaint or just some of them, but we've reached out for clarification.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/us-labor-board-drops-allegation-that-apples-ceo-violated-employees-rights-143053792.html?src=rss
Shark robot vacuums are up to 58 percent off ahead of Prime Day
With fall Prime Day around the corner, we're already starting to see solid deals on tech we love. Case in point: Shark robot vacuums. Shark makes some of our favorite robovacs and a few of them are already discounted for Prime members ahead of the sale. The Shark AV2501S AI Ultra robot vacuum is one of them, with a whopping 58-percent discount that brings it down to $230. This discount marks a record low for this model.
Shark offers several variations of its AI Ultra robot vacuums. There are small variations between them, and a different model is our pick for the best robot vacuum for most people. In general, you can expect solid cleaning performance from these devices, along with accurate home mapping and an easy-to-use app.
The model that's on sale here is said to run for up to 120 minutes on a single charge, which should be enough to clean an entire floor in a typical home. The self-emptying, bagless vacuum can store up to 30 days worth of dirt and debris in its base. Shark says it can capture 99.97 percent of dust and allergens with the help of HEPA filtration.
If you'd rather plump for a model that's able to mop your floors too, you're in luck: a Shark Matrix Plus 2-in-1 vacuum is on sale as well. At $300 for Prime members, this vacuum is available for $400 (or 57 percent) off the list price. Its mopping function can scrub hard floors 100 times per minute. You can also trigger the Matrix Mop function in the app for a deeper clean. This delivers 50 percent better stain cleaning in targeted zones, according to Shark.
This article originally appeared on Engadget at https://www.engadget.com/deals/shark-robot-vacuums-are-up-to-58-percent-off-ahead-of-prime-day-171836084.html?src=rss
This Anker MagSafe power bank is down to a record low ahead of Prime Day
We can all be honest and say that carrying around a bulky power bank almost makes it seem like your phone dying isn't so bad. Between the heaviness and any necessary cords, they can just be a pain. So, we were intrigued when Anker debuted a new, very thin power bank this summer: the Anker Nano 5K MagGo Slim power bank.
Now, both Anker and Amazon are running sales on it, dropping the price from $55 to $46. The 16 percent discount is a new low for the power bank and is available on both the black and white models. It's just about a third of an inch thick and attaches right to your iPhone. On that note, it works with any MagSafe-compatible phone with a magnetic case.
Anker's Nano 5K MagGo Slim is our pick for best, well, slim MagSafe power bank. It took two and a half hours to charge an iPhone 15 from 5 percent to 90 percent. However, it could boost the battery to 40 percent in just under an hour. Overall, though, the minimalist design and easy-to-grip matte texture really sold it to us.
Follow @EngadgetDeals on X for the latest tech deals and buying advice.
This article originally appeared on Engadget at https://www.engadget.com/deals/this-anker-magsafe-power-bank-is-down-to-a-record-low-ahead-of-prime-day-121512235.html?src=rss
EA reportedly plans to go private with help from Silver Lake and Saudi Arabia
Electronic Arts is close to reaching a $50 billion deal that will turn it into a privately held company, according to The Wall Street Journal. The video game company filed for an IPO way back in 1990 and has been public ever since, but now a group of investors are in talks with the company to take it private. Those investors reportedly include private equity firm Silver Lake, Saudi Arabia's Public Investment Fund (PIF) and Jared Kushner's Affinity Partners, whose largest source of funding is also Saudi's PIF.
It's worth noting that EA's shares are already tied to major financial organizations, even though it's publicly traded, with Saudi Arabia's PIF owning almost 10 percent of the company. As Reuters notes, analysts believe Saudi Arabia is interested in buying out EA due to its annual release of popular sports titles, including Madden and NHL, which makes for predictable earnings.
Saudi Arabia has made several major investments in the video game industry as part of its efforts to prepare for a post-oil economy. In addition to its investment in EA, it also purchased stakes in Take-Two Interactive, Activision Blizzard, Nintendo and the Embracer Group. In March, Pokémon Go maker Niantic sold its gaming division to a Saudi-owned company, as well. Unlike PIF and Kushner's Affinity Partners, Silver Lake doesn't have a huge stake in EA at the moment and doesn't have notable gaming investments other than its stake in Unity.
Bloomberg and The Financial Times report that the company could announce the buyout as soon as next week, but details could change since nothing has been finalized yet. If the $50 billion deal does go through, it'll become the biggest leveraged buyout of all time.
This article originally appeared on Engadget at https://www.engadget.com/gaming/ea-reportedly-plans-to-go-private-with-help-from-silver-lake-and-saudi-arabia-123011751.html?src=rss
How to set your PS5 as your home console
Setting your PlayStation 5 as your primary console ensures other users on that system can access your digital games and PlayStation Plus benefits. This includes offline access to your library and shared access for other local profiles on the same device.
This guide explains how to enable Console Sharing and Offline Play on your PS5, along with tips to manage your account and avoid common issues. After all, sharing is caring, and this can be a great way for your squad at home to experience a stack of games at no extra cost, while claiming all the trophies (and the glory) for their own profiles.
Console Sharing and Offline Play is the PS5 equivalent of designating a “primary” console. When enabled on your PlayStation 5, it provides the following perks:
Any local user on that console can play games from your library.
Any local user can access your PlayStation Plus subscription benefits, including online multiplayer and game catalog access.
Local users can also play your digital games without an internet connection.
This feature is tied to your PlayStation Network (PSN) account and can only be active on one PS5 console at a time. Therefore, if you sign into a new console and activate this setting, it will be disabled on the previous system.
To set your PS5 as your home console, you need:
An active PlayStation Network account
A stable internet connection (required to enable the feature initially)
Physical access to the console where you want to activate Console Sharing
You should also confirm that your account is the one used to purchase digital games or subscribe to PlayStation Plus. This ensures shared access will work correctly.
Follow these instructions to enable Console Sharing and Offline Play:
Sign in with your main account
Turn on your PlayStation 5 and log in to the PSN account that owns the games and subscription.
Go to Settings
From the Home screen, select the gear icon in the top-right corner to open the Settings menu.
Select Users and Accounts
Scroll down and open the “Users and Accounts” section.
Navigate to Other
In the left sidebar, scroll to and select “Other.”
Open Console Sharing and Offline Play
Choose the option labeled “Console Sharing and Offline Play.” This section controls access to your games and services.
Enable the feature
If the feature is currently disabled, select “Enable.” You will see a confirmation message that this PS5 is now your active console for sharing and offline access.
Once enabled, other user profiles on the console will be able to launch your digital games and use PlayStation Plus features. You will also retain access to your library even when disconnected from the internet.
You can return to the same settings page to check if Console Sharing is active. If you see “Disable” as the available option, the feature is currently turned on for that console.
If you want to remove access or switch primary status to another console, select “Disable” to turn off Console Sharing on your current system. Then, repeat the setup steps on your new console.
There are a few restrictions you should keep in mind:
Only one active console: You can only enable Console Sharing and Offline Play on one PS5 per account. Activating it on a second system will automatically deactivate it on the first. For example, if you enable it on your console at home and then go to a friend’s place and activate it on their PS5, it will deactivate the feature on your home console.
No remote deactivation option: Unlike previous console generations, you cannot deactivate this feature from a web browser. You must do it manually from the console or activate a new one.
Game sharing applies locally only: Console Sharing only applies to users on the same PS5. It does not let others on different consoles access your content, even if they are signed into your account elsewhere.
Network issues may affect access: While offline play is supported, some Digital Rights Management-protected (DRM) content may occasionally require a revalidation online. DRM is a form of access control technology that limits the copying, distribution and use of digital media to authorized users. So with this in mind, it’s a good idea to launch games at least once while connected to the internet after downloading.
Sharing games with family
If multiple people use the same PS5, enabling Console Sharing allows each user to access the same game library without needing to purchase extra copies. Each person can have their own profile and saves while still playing the same titles, nipping any gaming-related arguments in the bud before they happen.
Playing offline
If your internet connection is unreliable or you plan to use your PS5 in a location without internet access, enabling this feature ensures your downloaded games and PS Plus benefits remain accessible.
Upgrading or replacing your console
If you purchase a new PS5 or switch devices, you will need to re-enable Console Sharing on the new console. However, make sure to disable this feature on the old system before selling or giving it away. If you forget to do this, it will remain active on your old console until you enable it on the new one.
If Console Sharing is turned off:
Only the account that purchased the games will be able to launch them.
Other users on the same console will be blocked from opening digital games tied to your account.
PlayStation Plus features like online play and game catalog access will not be shared.
You may lose access to your games when offline.
Enabling Console Sharing ensures uninterrupted access for all users and prevents unexpected restrictions, especially during outages or while gaming away from home. It only takes a few minutes to set up, but it can save you plenty of headaches.
Setting your PS5 as your home console by enabling Console Sharing and Offline Play ensures that you and other users on the same system can access your digital library and PlayStation Plus features. It’s a one-time process that helps avoid re-downloads, account switching, or unnecessary duplicate purchases. While only one PS5 can be linked at a time, switching is easy through the system settings.
As long as you remember it only applies to one console at a time, Console Sharing and Offline Play can make your PS5 experience smoother for both you and anyone else using it.
This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/how-to-set-your-ps5-as-your-home-console-120059735.html?src=rss
Hades 2, slot machine horror and other new indie games worth checking out
Welcome to our latest roundup of what's going on in the indie game space. It's been a packed week, with tons of new releases worth highlighting and Tokyo Game Show taking place.
Before we get started, make sure to check out our recap of Kojima Productions' 10th anniversary showcase if you need to catch up. I can’t quite get my head around how a literal walking sim from Hideo Kojima might work. Sony had a bunch of things to show off during its PlayStation State of Play this week, including a few tasty-looking indies like Chronoscript: The Endless End. So too did Xbox in its Tokyo Game Show stream — Double Dragon Revive looks neat, as does Rhythm Doctor.
Also, the developers and publishers of several of this week's arrivals delayed them to get some breathing space from Hollow Knight: Silksong... only to run right into Hades 2. That's extremely unfortunate. But the teams behind some newcomers — Baby Steps, CloverPit, Aethermancer, Star Birds and Deadly Days: Roadtrip — are doing something about that. They've teamed up for a special Steam sale and bundle of their games. Love to see indie developers supporting each other.
Hades 2 is finally out of early access on PC. The full game is now available on Nintendo Switch and Switch 2 as well.
Reviews have been pretty stellar for Supergiant’s sequel. I played a little of it in early access last year, but decided to hold off getting in too deep until the full version arrived. And, of course, I now have a ton of other games to play. I'll absolutely spend some time with Hades 2 eventually. But there's another roguelite that's soaking up a lot of my time right now...
I feel grimy when I'm playing CloverPit. I'm imprisoned in a tiny, rusty, metallic room that wouldn't look out of place in Silent Hill's Otherworld. I have a debt to pay and deadlines to meet, with some coins, lucky charms and a slot machine to help me reach my goals and hopefully escape. Failure means plunging into a dark abyss.
Whenever I haven't been playing EA Sports FC 26 in my free time, I willingly keep returning to this disgusting cell. I try desperately to find synergies between the lucky charms to break the slot machine and make sure I earn enough coins to resolve the arrears. Offers made by telephone, almost Deal or No Deal-style, can help while perhaps adding a greater risk of losing all my coins.
Panik Arcade has stressed that this is a horror game, not a gambling simulator. The whole idea is to bend the rules in your favor.
I haven't yet had a successful run. I did pretty well a few times with builds focused on cherries and diamonds, though deadline 11 has remained out of reach for me thus far. No spoilers here, but there's a big jump from the 10th deadline's debt level.
The game is incredibly sticky, and I can see myself sinking many, many more hours into CloverPit. (I won't be alone there. I just watched a video of someone who put 155 hours into the demo.)
CloverPit, which is published by Future Friends Games, is out now on Steam.
I had fun with the Baby Steps demo this summer, but after looking forward to this literal walking simulator for a couple of years, I realize that I'm more likely to watch a YouTube video of someone playing it than try to beat it myself. I’d probably do that on a treadmill so I can get my own steps in at the same time.
This is the latest game from Bennett Foddy (QWOP, Getting Over It), Gabe Cuzzillo and Maxi Boch, who previously made Ape Out together. It sees "an unemployed failson" being forced to get up off his rear end and make it to the peak of a mountain. To take Nate there, you'll need to pick up one foot and move it onto (hopefully) stable ground before moving his other leg, taking one clumsy step at a time to reach his destination.
Baby Steps is supposed to be as funny as it is frustrating. You will fall. A lot. Sometimes in a way that erases much of your progress. But as with working out, progress is the point. If only Nate would actually use his damn arms for stability as well. Then you might really start to see some results.
Baby Steps is out now on Steam and PS5.
I've had my eye on Bloodthief for a while. It's a vampiric, medieval take on fast-paced dungeon running in the vein of Ghostrunner with Ultrakill-style murdering. A solo developer who goes by Blargis is behind this game, which hit Steam this week.
Giving so much of my attention to CloverPit and don't-call-it-FIFA (and a few others we'll get to momentarily) means I haven't had much time to check out Bloodthief yet. Still, I look forward to being as terrible at it as I am at Ghostrunner 2.
One of the highlights of Playdate Season 2 is Blippo+, a parody of cable TV. The FMV experience from Yacht, Telefantasy Studios, Noble Robot and publisher Panic has moved into the color TV age, as it's now available on Nintendo Switch and Steam.
As you channel surf the otherworldly broadcasts and observe the offbeat alien TV personalities doing their thing, you might start to piece together a deeper story that's playing out across the shows and news programs. Blippo+ is such a strange, wonderful thing. I'm glad it exists and that more people have the chance to enjoy it.
Consume Me is a coming-of-age life sim about a student who is entering her last year of high school and dealing with the stress and complexity of that painful time. For Jenny, that means managing chores (such as laundry and walking the dog), her studies, dates with her boyfriend and an eating disorder. Time management is a key factor, and you'll try to stay on top of everything by playing minigames.
Consume Me, which is based in part on co-developer Jenny Jiao Hsia's own experiences as a teenager, won the Seamus McNally Grand Prize at this year’s Independent Games Festival. AP Thomson, Jie En Lee, Violet W-P and Ken "coda" Snyder are the other developers of the game, which Hexecutable published. Consume Me is out now on Steam for PC and Mac.
Hotel Barcelona brought together two famed game directors, Swery (Hidetaka Suehiro), of Deadly Premonition fame and No More Heroes creator Suda51 (Goichi Suda). The latter came up with the concept for this game, which Swery announced all the way back in 2019. So the roguelite had been in the works for quite some time before it checked in to PC and consoles this week.
Here, you'll fight your way through a hotel that serial killers have overrun. You can rope in a couple of friends to help you thanks to multiplayer support. In the style of many FromSoftware titles, you'll also have the option to invade other players' games and play spoiler by taking them out and undoing their progress. That seems really mean, though. I don’t know why anyone would do that.
Hotel Barcelona, from Swery's White Owls Inc. and publisher Cult Games, is out now on Steam, Xbox Series X/S and PS5.
Annapurna Interactive is always a publisher worth paying attention to given its strong track record. This week, it revealed three upcoming adventure games during a showcase at Tokyo Game Show. I checked out demos for a couple of them, and I've already added all three to my wishlist.
D-topia is set in an apparent utopia run by artificial intelligence. You play as a maintenance worker who tries to keep things humming along by solving logic puzzles in the factory and helping out others with their problems. Your choices decide how the story plays out and, shock horror, things might not be going entirely smoothly behind the scenes.
I dig the very clean look here. It reminds me a bit of Mirror's Edge. The dialogue in the demo is fun too. Expect to see this narrative-driven puzzler from Marumittu Games land on Steam, Epic Games Store, Nintendo Switch, Switch 2, PS5, Xbox Series X/S and Windows PC via the Xbox App in 2026.
Also coming to Steam, Epic Games Store, PS5, Xbox Series X/S and Windows PC via the Xbox App next year is People of Note by Iridium Studios. This is billed as a "musical narrative adventure" that sees pop singer Cadence seeking stardom with the help of other musicians who specialize in different genres. You'll need to time your attacks to the beat to make them more effective, while genres play a role in making battles more dynamic.
Turn-based combat generally isn't my bag and I didn't enjoy it in this demo either. However, Iridium wants people to be able to play the game their way. People of Note will include the option to disable things like turn-based combat and environmental puzzles. That immediately makes the game more appealing to me, especially because I like what I've seen of the world, story and characters. The promise of "full-length cinematic musical sequences" sure sounds good to me too.
The third game Annapurna showed off is Demi and the Fractured Dream. I haven't had a chance to try the demo for this one as yet, but it looks like a Zelda-esque action adventure with environmental puzzles, platforming and plenty of hacking and slashing. As Demi, a cursed hero who is trying to save the world by slaying a trio of Accursed Beasts, you'll have a variety of tools and spells at your disposal. Time your dodges just right, and you'll power up your next set of attacks.
This game from Yarn Owl is coming to Steam, Epic Games Store, Nintendo Switch, Switch 2, PS5, Xbox Series X/S and Windows PC via the Xbox App in 2026.
This week's State of Play included a gameplay trailer for Halloween, from IllFonic and co-publisher Gun Interactive. We also got a release date for it. The horror game is coming to PlayStation, Xbox, Steam and Epic Games Store on September 8, 2026. Why it's not dropping in late October is beyond me.
This is an asymmetric multiplayer game in the vein of Friday the 13th: The Game (also from IllFonic and Gun) and The Texas Chain Saw Massacre, which Gun published. Three teammates will play as civilians who are trying to save the intended NPC victims of Michael Myers. If you'd rather go it alone, though, you can terrorize Haddonfield, Illinois as the legendary killer in a single-player mode.
This article originally appeared on Engadget at https://www.engadget.com/hades-2-slot-machine-horror-and-other-new-indie-games-worth-checking-out-110000884.html?src=rss
New Jersey Theme Park Puts Animatronic Dinosaurs on Facebook Marketplace as It Shuts Down
"Just be sure you’ve got a big backyard."
‘Magic: The Gathering’ Will Add More ‘Final Fantasy’ and PlayStation to Its Decks
Wizards of the Coast is ending 2025 with more 'Final Fantasy' cards for 'Magic: The Gathering' and a surprise PlayStation collaboration.
NASA Couldn’t Get Its Rover to the Moon, So Blue Origin Will Do It Instead
Jeff Bezos's Blue Origin will deliver VIPER to the lunar surface in 2027.
Feds Scrutinizing Potential Insider Trading in Major Crypto Deals
Wall Street regulators are reportedly flagging suspicious trading in the stocks of companies that have announced big crypto bets.
Reinventing SETI: Why Our Alien-Hunting Playbook Needs an Upgrade
In this excerpt from his new book, John Gertz argues it’s time to ditch SETI’s old dogmas and rethink how we prepare for first contact.
Why AI Won’t Replace Your Weather App (Yet)
AI can summarize, interpret, and enhance – but when it comes to delivering reliable, real-time weather data, your radar-based weather app is still irreplaceable. Here's why.
MySQL AI Introduced for Enterprise Edition
Oracle has recently announced MySQL AI, a new set of AI-powered capabilities available exclusively in the MySQL Enterprise edition, targeting analytics and AI workloads in large deployments. Concerns are rising throughout the MySQL community over the future of the popular Community edition, amid fears of vendor lock-in and following recent internal layoffs.
By Renato Losio
US To Revoke Colombian President's Visa Over 'Incendiary Actions'
The US State Department said it would revoke the visa of Colombia's leftist President Gustavo Petro, who returned to Bogota on Saturday after being accused of "incendiary actions" during a pro-Palestinian street protest in New York.
Protesters Demand Answers 11 Years After Mexican Students Vanished
Eleven years after her son vanished, Delfina de la Cruz vented frustration at the unsolved disappearances of 43 Mexican students who were allegedly kidnapped by drug traffickers while authorities turned a blind eye.
'Snapback': What Sanctions Will Be Reimposed On Iran?
A raft of UN sanctions on Iran over its nuclear program, lifted under a landmark 2015 deal, will go back into force at the end of Saturday -- barring a diplomatic breakthrough, thought to be unlikely.
UN Sanctions On Iran Set To Return As Nuclear Diplomacy Fades
Iran is set to come under sweeping UN sanctions late Saturday for the first time in a decade, barring an unexpected last-minute breakthrough, after nuclear talks with the West floundered.
King Charles III To Visit Vatican In October
King Charles III, head of the Church of England, and Queen Camilla will make a state visit to meet Pope Leo XIV for the first time at the Vatican next month, Buckingham Palace said Saturday.
Argentine Victims Of Live-streamed Murder Laid To Rest On Eve Of Protest
Shocked and in tears, relatives on Friday laid to rest two women and a girl whose live-streamed torture and murder caused an outcry in Argentina, where activists are planning a weekend protest against femicide.
Apple's iPhone 17 will forever change how we take selfies - including on Android phones
Look for other phone makers to take inspiration from the iPhone 17 and change their front-facing cameras in 2026.
The best Linux laptops in 2025: Expert tested for students, hobbyists, and pros
As Linux continues to grow as a popular alternative to Windows and macOS, we tested some of the best Linux-compatible laptops from brands like Lenovo and Dell. Here are our favorites.
My favorite smart bulbs add ambience to any home, and they're under $20 before October Prime Day
The GE Cync smart bulbs let you control the lighting and mood of your home. Right now, as an early Prime Day deal, you can get a two-pack for 20% off.
50 AI agents get their first annual performance review - 6 lessons learned
For one year, a McKinsey team observed these digital employees on the job. Here's their progress report.
AI magnifies your team's strengths - and weaknesses, Google report finds
AI magnifies how well (or poorly) you already operate. The 2025 DORA report reveals seven practices that separate high-performing teams from struggling ones.
The best cheap tech gifts you can buy under $25
ZDNET found the best cheap tech gifts -- and deals -- under $25, so you can save on gift-giving for everyone on your list.
Get a free iPhone 17 Pro (and cheaper Powerbeats Pro 2 earbuds) at AT&T - here's how
Save up to $1,100 on the iPhone 17 Pro with trade-in and get $50 off a pair of the Powerbeats Pro 2 earbuds at AT&T. Here are the details.
Your TV's USB port has hidden superpowers: 5 clever ways I use mine
It's actually surprisingly useful.
The best tech gifts you can buy under $100
You don't have to break the bank to gift tech. Check out these fun and affordable gadgets for $100 or less, plus how you can save on them during pre-holiday sales.
Can't upgrade to Windows 11? These are my 4 most powerful troubleshooting secrets
If a Windows upgrade has ever gone sideways on you, you know how vague and unhelpful the error messages can be. Here are my go-to troubleshooting tricks when that happens.
You should disable ACR on your TV right now - here's how and why
Your smart TV comes with privacy risks. But you can avoid them.
Why this portable hotspot may be my favorite travel gadget of the year
TravlFi's JourneyGo 4G Hotspot delivers a reliable wireless network for travelers away from Wi-Fi.
ChatGPT will let your team collaborate via 'shared projects' - and other work-friendly updates
OpenAI's chatbot can now automatically pull info from apps like Gmail and DropBox, among other perks. Here's who gets to try them first.
How to Build an Intelligent AI Desktop Automation Agent with Natural Language Commands and Interactive Simulation?
In this tutorial, we walk through the process of building an advanced AI desktop automation agent that runs seamlessly in Google Colab. We design it to interpret natural language commands, simulate desktop tasks such as file operations, browser actions, and workflows, and provide interactive feedback through a virtual environment. By combining NLP, task execution, and […]
The post How to Build an Intelligent AI Desktop Automation Agent with Natural Language Commands and Interactive Simulation? appeared first on MarkTechPost.
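The full tutorial sits behind the link, but the core pattern it describes (parsing natural-language commands and dispatching them to simulated desktop actions) can be sketched minimally. The command phrasings, action names, and simulated "desktop" below are illustrative assumptions, not the tutorial's actual code:

```python
import re

# Minimal sketch: map natural-language commands to simulated desktop
# actions via regex intent matching. A real agent would swap the regexes
# for an LLM or NLP parser and the simulation for real OS calls.
class DesktopAgent:
    def __init__(self):
        self.files = set()    # simulated file system
        self.history = []     # log of executed actions

    def execute(self, command: str) -> str:
        cmd = command.lower().strip()
        m = re.match(r"create (?:a )?file (?:called |named )?(\S+)", cmd)
        if m:
            self.files.add(m.group(1))
            self.history.append(("create_file", m.group(1)))
            return f"created {m.group(1)}"
        m = re.match(r"delete (?:the )?file (\S+)", cmd)
        if m:
            self.files.discard(m.group(1))
            self.history.append(("delete_file", m.group(1)))
            return f"deleted {m.group(1)}"
        m = re.match(r"open (?:the )?browser(?: to (\S+))?", cmd)
        if m:
            url = m.group(1) or "about:blank"
            self.history.append(("open_browser", url))
            return f"browser -> {url}"
        return "unrecognized command"

agent = DesktopAgent()
print(agent.execute("Create a file called notes.txt"))   # created notes.txt
print(agent.execute("Open the browser to example.com"))  # browser -> example.com
```

Because every action is simulated and logged, a sketch like this runs safely in a sandbox such as Colab, which is the same "interactive simulation" idea the tutorial's title points at.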
Meet Qwen3Guard: The Qwen3-based Multilingual Safety Guardrail Models Built for Global, Real-Time AI Safety
Can safety keep up with real-time LLMs? Alibaba’s Qwen team thinks so, and it just shipped Qwen3Guard, a multilingual guardrail model family built to moderate prompts and streaming responses in real time. Qwen3Guard comes in two variants: Qwen3Guard-Gen (a generative classifier that reads full prompt/response context) and Qwen3Guard-Stream (a token-level classifier that moderates as text is generated). Both […]
The post Meet Qwen3Guard: The Qwen3-based Multilingual Safety Guardrail Models Built for Global, Real-Time AI Safety appeared first on MarkTechPost.
How to fix the web, according to the man who invented it
Tim Berners-Lee invented the World Wide Web, revolutionising modern life. But it isn't without its dark side...
Unexpected Critics of Trump’s Attacks on Wind Energy: Oil Executives
Business leaders and trade organizations have been especially worried by attempts to stop work on wind farms that had already secured federal approval.
Sigma 35mm f/1.2 DG Art II Review: Compact, Lightweight, and Ultra-Fast
35mm has long been one of my favorite focal lengths, so it’s odd that I never had the chance to use the original Sigma 35mm f/1.2 DG Art as it would be right in my wheelhouse. Fortunately, when Sigma launched the second version of this optic in both E-Mount and L-Mount, I had a chance to make up for this omission.
Is ‘Beauty’ in Photography Out of Step with the ‘Art’ of Photography?
Is our pursuit of exquisite photos inconsistent with what is expected in art? Or is beauty something we should continue to strive for?
Git Pushups
Do pushups or we block your code
Lab-Grown Organoids Could Transform Female Reproductive Medicine
Artificial tissues that mimic the placenta, endometrium, ovary and vagina could point to treatments for common conditions such as preeclampsia and endometriosis
Quoting Dan Abramov
Conceptually, Mastodon is a bunch of copies of the same webapp emailing each other. There is no realtime global aggregation across the network so it can only offer a fragmented user experience. While some people might like it, it can't directly compete with closed social products because it doesn't have a full view of the network like they do.
The goal of atproto is to enable real competition with closed social products for a broader set of products (e.g. Tangled is like GitHub on atproto, Leaflet is like Medium on atproto, and so on). Because it enables global aggregation, every atproto app has a consistent state of the world. There's no notion of "being on a different instance" and only seeing half the replies, or half the like counts, or other fragmentation artifacts as you have in Mastodon.
I don't think they're really comparable in scope, ambition, or performance characteristics.
— Dan Abramov, Hacker News comment discussing his Open Social article
Trump demands Microsoft fire global affairs head Lisa Monaco
“It is my opinion that Microsoft should immediately terminate the employment of Lisa Monaco,” Trump said of a former Biden administration official.
How South Korea plans to best OpenAI, Google, others with homegrown AI
South Korea has launched its most ambitious sovereign AI initiative yet, as the nation's major tech players like LG and SK Telecom develop their own LLMs.
Famed roboticist says humanoid robot bubble is doomed to burst
Brooks, who co-founded iRobot and spent decades at MIT, is particularly skeptical of companies like Tesla and Figure trying to teach robots dexterity by showing them videos of humans doing tasks. In a new essay, he calls this approach "pure fantasy thinking."
Leaked a16z decks: $25B in net returns since its 2009 founding, including $11.2B in 2021, and 56 unicorn investments in the past 10 years, the most of any firm (Eric Newcomer/Newcomer)
Eric Newcomer / Newcomer:
Leaked a16z decks: $25B in net returns since its 2009 founding, including $11.2B in 2021, and 56 unicorn investments in the past 10 years, the most of any firm — The firm has held its $900 million 2012 Fund III at 9.4x net TVPI. — ∙ Paid — Andreessen Horowitz is perhaps …
Cloudflare combines a new Email Sending feature with Routing into a unified Email Service to let developers send emails from Cloudflare Workers, in private beta (Cloudflare)
Cloudflare:
Cloudflare combines a new Email Sending feature with Routing into a unified Email Service to let developers send emails from Cloudflare Workers, in private beta — If you are building an application, you rely on email to communicate with your users. You validate their signup …
Oracle, whose expenses are rising as it begins to fulfill massive cloud infrastructure deals, turned cash flow negative this year for the first time since 1992 (Brody Ford/Bloomberg)
Brody Ford / Bloomberg:
Oracle, whose expenses are rising as it begins to fulfill massive cloud infrastructure deals, turned cash flow negative this year for the first time since 1992 — Welcome to Tech In Depth, our daily newsletter about the business of tech from Bloomberg's journalists around the world.
Why YouTube is key to Google's success in AI, with YouTube's AI enhancements making video content more monetizable than text-based content in Search (Ben Thompson/Stratechery)
Ben Thompson / Stratechery:
Why YouTube is key to Google's success in AI, with YouTube's AI enhancements making video content more monetizable than text-based content in Search — Action is happening up-and-down the LLM stack: Nvidia is making deals with Intel, OpenAI is making deals with Oracle, and Nvidia and OpenAI are making deals with each other.
Sources: Apple is ramping up efforts to source manufacturing machinery in India, essential for making iPhones locally, and is working with around 17 suppliers (Danish Khan/Moneycontrol)
Danish Khan / Moneycontrol:
Sources: Apple is ramping up efforts to source manufacturing machinery in India, essential for making iPhones locally, and is working with around 17 suppliers — Around 17 companies have directly started working with Apple in India for capital equipment and tools over the last 20-24 months and the number is only expected to go up
An interview with California state Senator Scott Wiener on his new AI safety bill SB 53, the bill's scope, his focus on AI safety bills, AI PACs, and more (Maxwell Zeff/TechCrunch)
Maxwell Zeff / TechCrunch:
An interview with California state Senator Scott Wiener on his new AI safety bill SB 53, the bill's scope, his focus on AI safety bills, AI PACs, and more — This is not California state senator Scott Wiener's first attempt at addressing the dangers of AI.
A look at some California tech regulation bills, including one banning AI use in firing or disciplining workers, that await Gov. Newsom's signature or his veto (Brian Merchant/Blood in the Machine)
Brian Merchant / Blood in the Machine:
A look at some California tech regulation bills, including one banning AI use in firing or disciplining workers, that await Gov. Newsom's signature or his veto — A guide to the AI and tech bills that have passed the California legislature, and await the governor's signature — or veto.
China's central bank opens a digital yuan hub in Shanghai; a Chinese state media report says the hub will oversee cross-border payments and blockchain tech (Foster Wong/Bloomberg)
Foster Wong / Bloomberg:
China's central bank opens a digital yuan hub in Shanghai; a Chinese state media report says the hub will oversee cross-border payments and blockchain tech — China's central bank has opened a digital yuan operations center in Shanghai featuring platforms for cross-border payments …
At an all-hands, AWS CEO Matt Garman criticized staff for slow product rollouts and demonstrated a new agentic AI product for internal testing called Quick (Greg Bensinger/Reuters)
Greg Bensinger / Reuters:
At an all-hands, AWS CEO Matt Garman criticized staff for slow product rollouts and demonstrated a new agentic AI product for internal testing called Quick — - AWS executive criticizes slow product rollouts at Reinvent event — Amazon grapples with AI competition and market perception
Google agrees to guarantee $1.4B of AI computing startup Fluidstack's $3B, 10-year agreement with Cipher Mining and gets the right to buy a 5.4% stake in Cipher (Bloomberg)
Bloomberg:
Google agrees to guarantee $1.4B of AI computing startup Fluidstack's $3B, 10-year agreement with Cipher Mining and gets the right to buy a 5.4% stake in Cipher — Alphabet Inc.'s Google has agreed to anchor a $3 billion data center contract with Cipher Mining Inc. in the latest tie up between …
eBay agrees to acquire Tise, a social marketplace for secondhand fashion and interior design items that has raised $45M in funding, for an undisclosed sum (Aisha Malik/TechCrunch)
Aisha Malik / TechCrunch:
eBay agrees to acquire Tise, a social marketplace for secondhand fashion and interior design items that has raised $45M in funding, for an undisclosed sum — eBay announced on Monday that it's acquiring Tise, a social marketplace for second-hand fashion and interior design items.
Swiss microfluidic chip-cooling tech startup Corintis raised a $24M Series A, a source says at a $400M valuation, and announces adding Lip-Bu Tan to its board (Reuters)
Reuters:
Swiss microfluidic chip-cooling tech startup Corintis raised a $24M Series A, a source says at a $400M valuation, and announces adding Lip-Bu Tan to its board — - Corintis valued at around $400 million after Series A round - source — Corintis plans to expand team, scale up manufacturing and open U.S. offices
Salt Lake City-based Filevine, which offers tools to help manage legal workflows, raised $400M from Insight Partners, Accel, Halo Fund, and others in two rounds (Maria Deutscher/SiliconANGLE)
Maria Deutscher / SiliconANGLE:
Salt Lake City-based Filevine, which offers tools to help manage legal workflows, raised $400M from Insight Partners, Accel, Halo Fund, and others in two rounds — Filevine Inc., the developer of a software platform that helps attorneys create and manage legal documents, has raised $400 million in funding.
Q&A with reinforcement learning pioneer Richard Sutton on why LLMs are not the path to achieving human intelligence, world models, continual learning, and more (Dwarkesh Patel/Dwarkesh Podcast)
Dwarkesh Patel / Dwarkesh Podcast:
Q&A with reinforcement learning pioneer Richard Sutton on why LLMs are not the path to achieving human intelligence, world models, continual learning, and more — Richard Sutton is the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson.
A new Digital ID will soon be required to work in the UK
The British government has confirmed plans to introduce a nationwide digital identification system, a move Prime Minister Keir Starmer described as central to efforts to curb illegal migration and modernize public services.
Video game giant EA in talks to go private in blockbuster $50 billion buyout
Sources told The Wall Street Journal that Electronic Arts is in advanced talks to go private through a $50 billion leveraged buyout. If finalized, the deal – expected to be announced as soon as next week – would be the largest transaction of its kind on record.
OpenAI says top AI models are reaching expert territory on real-world knowledge work
GDPval sets a new standard for benchmarking AI on real-world knowledge work, with 1,320 tasks spanning 44 professions, all reviewed by industry experts.
The article OpenAI says top AI models are reaching expert territory on real-world knowledge work appeared first on THE DECODER.
Physicist David Deutsch argues that true general intelligence starts with having your own story
Physicist David Deutsch argues that you can't test for AGI the way you test a piece of software.
The article Physicist David Deutsch argues that true general intelligence starts with having your own story appeared first on THE DECODER.
Apple introduces Manzano, a model for both image understanding and generation
Apple is working on Manzano, a new image model designed to handle both image understanding and image generation.
The article Apple introduces Manzano, a model for both image understanding and generation appeared first on THE DECODER.
Anthropic settles landmark AI copyright lawsuit for at least $1.5 billion
A landmark settlement between Anthropic and US authors and publishers could set new ground rules for how AI companies use copyrighted books to train their models.
The article Anthropic settles landmark AI copyright lawsuit for at least $1.5 billion appeared first on THE DECODER.
Microsoft's VibeVoice is a new AI podcast model that might generate spontaneous singing
Microsoft's VibeVoice system can generate up to 90 minutes of conversation involving as many as four speakers.
The article Microsoft's VibeVoice is a new AI podcast model that might generate spontaneous singing appeared first on THE DECODER.
Google updates Gemini 2.5 Flash models to deliver faster responses and improved performance
Google has released new preview versions of its lightweight Gemini 2.5 Flash and Flash Lite models. Both are still in the experimental phase but now offer faster response times, handle multimedia more efficiently, and can tackle more complex tasks.
The article Google updates Gemini 2.5 Flash models to deliver faster responses and improved performance appeared first on THE DECODER.
Homebrew Project Lead Brings Data to Ruby Central’s Debate
If you’re following the Ruby dispute, you might want to check out a recent post by Mike McQuaid that drills
The post Homebrew Project Lead Brings Data to Ruby Central’s Debate appeared first on The New Stack.
First Look at Verdent, an Autonomous Coding Agent From China
I got onto the recent early access beta for Verdent, the new AI coding tool from TikTok’s former head of
The post First Look at Verdent, an Autonomous Coding Agent From China appeared first on The New Stack.
Introduction to Observability
What Is Observability? Observability is not a process but a concept. While its potential remains largely latent, its utility in
The post Introduction to Observability appeared first on The New Stack.
Alibaba unveils $53B global AI plan – but it will need GPUs to back it up
Chinese giant maps out datacenters across Europe and beyond, yet US chip curbs cast a long shadow
Analysis Alibaba this week opened an AI war chest containing tens of billions of dollars, a revamped LLM lineup, and plans for AI datacenters in Europe. But it also prompted a flurry of questions over how it will achieve all this in an increasingly fragmented IT landscape, when critical resources are in short supply.…
The strangest game of the year is a channel-surfing simulator
It's not quite October yet, and there are still plenty of video games set to be released before the end of the year. Even still, I'm pretty convinced that Blippo Plus will go down as the strangest release of 2025. Calling it a game might be a bit of a misnomer; it's more of an […]
Can Google be trusted without a break up?
On day three of the two-week remedies trial in the Justice Department's ad tech case against Google, Judge Leonie Brinkema boiled down the argument to one key issue: trust. Brinkema interrupted testimony from a DOJ expert with a hypothetical: should she issue a strict order modifying Google's behavior, could it resolve the issues at hand […]
When this EV maker collapsed, its customers became the car company
Cristian Fleming paid around $70,000 for his dream car, a Fisker Ocean. He was drawn to the new EV's 350-mile range, eco-friendly image, and quirky features like "California Modes," which rolls down nearly every window at once. "I've always bought my cars because I love the way they look," Fleming says. "That's probably my first […]
What to expect from Amazon’s big fall hardware event on Tuesday
Amazon is hosting its 2025 fall hardware event on Tuesday, September 30th, and it could be a packed show. The company’s invite has a few not-so-subtle hints about new Echos and a new Kindle. It will also be Amazon’s next big product event for Panos Panay, who joined Amazon in 2023 to head up its […]
Raleigh One e-bike review: redemption tour
Two good things have come from the 2023 bankruptcy of VanMoof. The first is the all-new VanMoof S6 e-bike that recently launched under new ownership. The second is a new commuter e-bike developed for Raleigh by VanMoof's departed founders, Ties and Taco Carlier. Like a VanMoof, the Raleigh One e-bike comes with anti-theft features like […]
3 Reasons Why Not to Use AI in Your Product
AI is often just hype; sometimes it can backfire, or it isn't necessary at all to begin with.
Why Your AI Is Slow on Windows — And How Windows ML Fixes It
Stop wasting GPU power — make on-device AI actually fast.
Turn consumer Windows machines into real AI devices — GPU acceleration, offline inference, lower cost, and better privacy with Windows ML + DirectML.
Ever felt that pang of frustration when your beautifully crafted machine learning model, a marvel of computational prowess on your beefy development rig, crawls to a snail’s pace on a user’s everyday Windows machine? You’re not alone. I’ve been there, staring at a progress bar that seemed to mock my ambition, wondering why the future of AI felt so… sluggish. We spend countless hours optimizing models, tweaking hyperparameters, and then, poof, it hits a wall called “deployment on consumer hardware.”
The problem? Often, it’s not the model itself, but how we’re asking Windows to run it. We’ve been relying on generic CPU inference or cumbersome external runtimes that don’t truly leverage the incredible power packed into modern Windows devices. Imagine having a Ferrari in your garage but only driving it in first gear. That’s essentially what many AI applications on Windows have been doing.
But what if there was a way to unleash that raw, untapped potential? What if your AI applications could run with native, GPU-accelerated speed directly on Windows devices, providing a seamless, real-time experience for your users? Enter Windows ML Architecture, Microsoft’s answer to this pervasive performance bottleneck. This isn’t just another library; it’s a paradigm shift for on-device AI, aimed at making your models sing on every Windows machine.
What Exactly Is Windows ML, and Why Should You Care?
At its core, Windows ML (Windows Machine Learning) is an API that allows developers to run trained machine learning models directly on Windows devices. Think of it as a specialized translator and orchestrator for your AI models, enabling them to speak the language of Windows hardware, especially the GPU. Instead of your model struggling through generic CPU operations, Windows ML ensures it can tap into the dedicated AI acceleration capabilities that are increasingly becoming standard in modern hardware.
This isn’t just about speed; it’s about empowerment. For AI engineers and tech enthusiasts, Windows ML means:
- Blazing-Fast Inference: Leveraging DirectX 12-compatible GPUs (via DirectML, which we’ll dive into next) for significantly faster model execution. No more waiting for complex operations to complete.
- Offline Capabilities: Your AI can work even without an internet connection, crucial for privacy-sensitive applications or environments with unreliable connectivity.
- Reduced Latency: Processing data locally eliminates network roundtrips, leading to real-time responsiveness.
- Enhanced User Privacy: Data stays on the user’s device, reducing the need to send sensitive information to the cloud.
- Lower Cloud Costs: Offloading inference from cloud servers can dramatically cut operational expenses for high-volume applications.
So, how does it achieve this magic? The secret sauce lies within a component called DirectML.
DirectML: The GPU Whisperer for Your AI Models
If Windows ML is the conductor, then DirectML is the star soloist, responsible for the high-performance execution of your machine learning models on DirectX 12-compatible hardware.
What is DirectML?
DirectML is a low-level, high-performance API for machine learning. It’s part of the DirectX family (yes, the same family that powers stunning graphics in games!), which means it’s deeply integrated with the graphics stack of Windows. This integration allows DirectML to optimize model execution directly on the GPU, maximizing throughput and minimizing latency. It’s essentially a specialized hardware abstraction layer that translates common ML operations into highly optimized GPU instructions.
Here’s a simplified breakdown:
- Model Format: You typically train your models in popular frameworks like PyTorch or TensorFlow. For Windows ML, models are consumed in the ONNX (Open Neural Network Exchange) format. ONNX is an open standard that allows interoperability between different ML frameworks, acting as a universal language for models.

- Windows ML Runtime: This is the brain of the operation. When you load an ONNX model into your Windows application using the Windows ML APIs, the runtime takes over. It analyzes the model’s computational graph and determines the most efficient way to execute each operation.
- DirectML Backend: If a DirectX 12-compatible GPU is available, the Windows ML runtime hands off the heavy lifting to DirectML. DirectML then optimizes these operations for the specific GPU architecture, utilizing its parallel processing capabilities for incredible speed. If a GPU isn’t available, or for certain operations, it gracefully falls back to optimized CPU execution.
This deep integration with the Windows graphics stack means DirectML can often outperform generic CPU-based inference and even other GPU acceleration solutions that aren’t as tightly coupled with the OS. It’s built from the ground up to take advantage of the hardware already present in your users’ machines.
How Does This Supercharge Developer Efficiency?
I hear you, “Another framework to learn?” But here’s the kicker: Windows ML is designed with developer ease in mind. It significantly streamlines the process of deploying AI models on Windows devices.
- Simplified API: The Windows ML API is straightforward. You load your ONNX model, create an input, bind it, and evaluate. No complex low-level GPU programming required. You can integrate it directly into your UWP, Win32, or .NET applications.
- Tooling Integration: Microsoft provides tools to help you convert your models to ONNX. For instance, the WinMLTools library for Python makes it easy to convert models from TensorFlow, Keras, or PyTorch to ONNX format.
- Automatic Hardware Acceleration: You don’t need to manually configure GPU usage. Windows ML intelligently detects and utilizes available hardware acceleration (via DirectML) if present, or falls back to CPU if not. This “set it and forget it” approach saves immense development time and ensures broad compatibility.
- Cross-Platform Model Compatibility: Since it uses ONNX, you can train your model in any framework you prefer and then convert it for deployment on Windows, maintaining flexibility in your development stack.
- Debugging & Profiling: Tools like the Windows Machine Learning Dashboard and integration with Visual Studio provide insights into model performance and resource utilization, helping you identify bottlenecks and optimize your models further.
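As a concrete illustration of the load, bind, and evaluate flow described above: the WinRT Windows ML API itself is consumed from C#/C++/.NET, but the same pattern can be sketched with ONNX Runtime's Python bindings and its DirectML execution provider. This is an illustrative parallel, not the Windows ML API itself; the `model.onnx` path and the `pick_providers` helper are assumptions made up for this example.

```python
# Illustrative sketch: running an ONNX model with GPU acceleration via
# ONNX Runtime's DirectML execution provider, falling back to CPU.
# "model.onnx" is a placeholder path; pick_providers is a helper
# invented for this example, not part of any Windows ML API.

def pick_providers(available):
    """Prefer DirectML when it is available, keeping CPU as a fallback."""
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

if __name__ == "__main__":
    try:
        import numpy as np
        import onnxruntime as ort  # pip install onnxruntime-directml

        providers = pick_providers(ort.get_available_providers())
        session = ort.InferenceSession("model.onnx", providers=providers)

        # Bind a dummy input matching the model's declared shape, then evaluate.
        inp = session.get_inputs()[0]
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
    except Exception:
        # onnxruntime-directml not installed, or no model.onnx on disk;
        # the load -> bind -> evaluate flow above is the part that matters.
        pass
```

The fallback mirrors the "set it and forget it" behavior described above: the same application code runs whether or not a DirectML-capable GPU is present.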
Imagine this: you’ve trained a brilliant object detection model. With Windows ML, you can integrate it into a desktop application, allowing users to analyze images in real-time, right on their machine, without ever touching the cloud. This opens up entirely new possibilities for privacy-centric AI, creative tools, and industrial applications.
Want to dive into the code yourself? The official GitHub repository is an excellent starting point:
https://github.com/microsoft/Windows-Machine-Learning
Here, you’ll find samples, documentation, and the tools to get started.
Real-World Wins: Where Windows ML Shines
The impact of Windows ML and DirectML extends across various domains:
- Creative Applications: Think real-time style transfer in photo editors, intelligent upscaling for video, or smart background blurring during video calls. These features demand low latency and high throughput, precisely what Windows ML delivers.
- Gaming: DirectML is a game-changer for game developers, enabling AI-powered super-resolution (like DLSS and FSR) and intelligent NPC behavior directly on the GPU, enhancing immersion and performance.
- Enterprise & Productivity: Smart document analysis, real-time transcription, or even predictive maintenance on industrial machines can all benefit from local, accelerated AI inference.
- Education & Research: Students and researchers can easily deploy and experiment with complex models on standard Windows hardware, democratizing access to high-performance AI.
The Road Ahead: Limitations and Open Questions
While Windows ML offers a significant leap forward, it’s essential to maintain a transparent, critical perspective.
- Model Compatibility: While ONNX is widely supported, not every esoteric layer or custom operation from all frameworks might translate perfectly without some manual tweaking or custom ONNX operator implementation.
- Hardware Dependency: The full performance benefits are realized on DirectX 12-compatible GPUs with good DirectML driver support. Older or very low-end hardware might not see the same dramatic improvements, though execution still falls back to an optimized CPU path.
- Debugging Complex Issues: While the tooling is good, diagnosing deeply technical performance issues that span the ML model, ONNX runtime, DirectML, and GPU drivers can still be challenging for very niche scenarios.
- Evolving Ecosystem: The ML landscape is constantly evolving. Staying up-to-date with the latest ONNX operator sets and DirectML enhancements requires continuous engagement with the ecosystem.
These are not necessarily roadblocks but rather areas where the community and Microsoft continue to refine and improve the experience. The open questions often revolve around further expanding hardware compatibility, enhancing debugging tools for complex models, and integrating even more seamlessly with cloud-trained models.
FAQ
Q1: What is Windows ML?
Ans: Windows ML is a Windows API/runtime that runs ONNX models locally on Windows devices, leveraging DirectML for GPU acceleration.
Q2: How do I run my model on Windows ML?
Ans: Convert your model to ONNX, load it with the Windows ML API in your app, and let Windows ML auto-select DirectML for GPU execution (see the GitHub repository linked above for samples).
Q3: Will older PCs benefit?
Ans: Newer DirectX 12 GPUs get the biggest gains; older machines fall back to optimized CPU paths, which is still better than generic runtimes in many cases.
Conclusion: Your AI Deserves a Native Home on Windows
If you’re an AI engineer or tech enthusiast building applications for Windows, ignoring Windows ML and DirectML is akin to leaving performance on the table. We’ve seen how this architecture directly addresses the pervasive problem of sluggish on-device AI inference, transforming it into a fluid, efficient experience. It empowers us to build more private, responsive, and cost-effective AI applications that truly harness the power of consumer hardware.
The future of AI is not just in the cloud; it’s increasingly at the edge, on the devices users interact with every day. Windows ML provides a robust, performant, and developer-friendly pathway to make that future a reality. So, next time you’re deploying an AI model to a Windows user, remember the silent killer of performance and equip your application with the native power it deserves.
What are your experiences with on-device AI deployment on Windows? Have you explored Windows ML, or are you facing similar challenges? Share your thoughts and insights in the comments below!
Acknowledgements
This exploration was inspired by the excellent documentation and ongoing work by Microsoft on Windows ML and DirectML. Special thanks to the teams behind the DirectX and Windows AI initiatives for empowering developers. Diagrams (hypothetically created) would leverage tools like Canva or draw.io, drawing inspiration from technical whitepapers on DirectX and machine learning.
Why Your AI Is Slow on Windows — And How Windows ML Fixes It was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
Mojo Part 2: Unlocking the Real Power of Mojo for AI/ML Development
The Sigmoid Function: Foundation of Neural Networks
Series: Foundation of AI — Blog 1

Every modern neural network stands on mathematical pillars.
One of the most important is the sigmoid activation function.
It’s not just a formula; it’s the bridge between linear math and nonlinear learning.
What is the Sigmoid?
Defined as:
σ(z) = 1 / (1 + e⁻ᶻ)
It takes any real number and compresses it into a value between 0 and 1. Think of it as a soft decision-maker: instead of “True/False”, it says “how likely is True?”.
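Plugging in a few values shows the compression at work; a quick check in plain Python, using only the standard math module:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z))"""
    return 1.0 / (1.0 + math.exp(-z))

# The output always lies strictly between 0 and 1:
# strongly negative inputs land near 0, zero lands at exactly 0.5,
# and strongly positive inputs land near 1.
assert sigmoid(0) == 0.5
assert sigmoid(-10) < 0.001
assert sigmoid(10) > 0.999
```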
Why Sigmoid Matters
Before sigmoid, models could only perform linear separation. With sigmoid, neurons could model probabilities and learn complex curves. It gave neural networks their first real ability to handle classification.
The sigmoid’s ability to output values between 0 and 1 makes it ideal for:
- Probability estimation — interpreting outputs as likelihoods
- Binary classification — distinguishing between two classes
- Gradient-based learning — enabling smooth weight updates
Before functions like sigmoid, neural networks could only handle linear separation problems.
Derivative: The Learning Engine
The mathematical elegance lies in how the sigmoid changes during training. Its derivative is simple yet profound:
dσ/dz = σ(z) ⋅ (1 − σ(z))
This compact formula allows gradients to flow backward, enabling backpropagation. Without it, the concept of deep learning would have remained a theory.
How We Derive This
Step 1: Start with the function
σ(z) = (1 + e⁻ᶻ)⁻¹
Step 2: Apply the Chain Rule
dσ/dz = -1 ⋅ (1 + e⁻ᶻ)⁻² ⋅ d/dz(1 + e⁻ᶻ)
Step 3: Differentiate the Inner Function
d/dz(1 + e⁻ᶻ) = -e⁻ᶻ
Step 4: Combine the Results
dσ/dz = -1 ⋅ (1 + e⁻ᶻ)⁻² ⋅ (-e⁻ᶻ) = e⁻ᶻ / (1 + e⁻ᶻ)²
Step 5: Express in Terms of σ(z)
Notice that:
- σ(z) = 1 / (1 + e⁻ᶻ)
- 1 − σ(z) = e⁻ᶻ / (1 + e⁻ᶻ)
Multiplying them gives:
σ(z) ⋅ (1 − σ(z)) = [1 / (1 + e⁻ᶻ)] ⋅ [e⁻ᶻ / (1 + e⁻ᶻ)] = e⁻ᶻ / (1 + e⁻ᶻ)²
Final Result: dσ/dz = σ(z) ⋅ (1 − σ(z))
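The closed form can be sanity-checked numerically: a centered finite difference should agree with σ(z)·(1 − σ(z)) at any point.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # The closed form derived above: d(sigma)/dz = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

# Compare against a centered finite difference at a few sample points.
h = 1e-6
for z in (-2.0, 0.0, 0.5, 3.0):
    numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2.0 * h)
    assert abs(numeric - sigmoid_prime(z)) < 1e-8
```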
Why This Matters for Learning
This derivative is computationally efficient because it reuses the neuron’s current output. During backpropagation, it determines how much each weight should change, making neural network training practical and efficient.
The sigmoid function demonstrated that neural networks could learn from data through mathematical optimization, paving the way for modern deep learning.
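To make "how much each weight should change" concrete, here is a toy single-neuron classifier trained by gradient descent. The data points, learning rate, and epoch count are made-up illustrative values; with the usual cross-entropy loss, the gradient with respect to the pre-activation z simplifies to (ŷ − y), a simplification that itself relies on the identity σ′(z) = σ(z)(1 − σ(z)).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy single-neuron binary classifier trained with gradient descent.
# Data, learning rate, and epoch count are made-up illustrative values.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]  # (input, label)
w, b, lr = 0.0, 0.0, 0.5

for _ in range(200):
    for x, y in data:
        y_hat = sigmoid(w * x + b)
        grad_z = y_hat - y          # dL/dz for cross-entropy + sigmoid
        w -= lr * grad_z * x        # chain rule: dz/dw = x
        b -= lr * grad_z            # chain rule: dz/db = 1

# After training, the neuron separates the two classes confidently.
assert sigmoid(w * 2.0 + b) > 0.9
assert sigmoid(w * -2.0 + b) < 0.1
```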
Next in series: The limitations of sigmoid and the evolution to modern activation functions.
The Sigmoid Function: Foundation of Neural Networks was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
DoodlAI- Build a Real-Time Doodle Recognition AI with CNN
Have you ever wondered if a computer could recognize your doodles of cats, trees, cars, or even clocks, as you draw them? That’s exactly what DoodlAI does. In this blog, I’ll take you step by step through building DoodlAI, a web application that uses deep learning to recognize hand-drawn sketches in real-time.

What is DoodlAI?
DoodlAI is an interactive platform where users can draw sketches, and the AI predicts the category of the drawing instantly. The system uses a Convolutional Neural Network (CNN), a type of deep learning model trained on a dataset of doodles, to recognize drawings such as:
- Animals: cat, dog
- Objects: car, house, clock
- Fruits: apple, banana
- Nature: tree
The AI then predicts the drawing in real-time, making it a fun tool to explore deep learning in action.

Explore the Project on GitHub
If you want to see the full code, download the dataset, or try running the project yourself, check out the DoodlAI repository on GitHub:
https://github.com/Abinaya-Subramaniam/DoodlAI
The repository includes:
- The complete CNN model code
- Data preprocessing scripts
- Training and evaluation notebooks
- Instructions to run the project locally or in Google Colab
Feel free to clone the repository, experiment with the model, or even contribute improvements!
Now, let’s go through the steps to build the project.
What is CNN?
A Convolutional Neural Network (CNN) is a type of deep learning model designed to process images. Unlike traditional neural networks, CNNs can automatically detect patterns like edges, shapes, and textures without us manually extracting features.
Key components of a CNN:
- Convolutional Layers: Apply filters to detect features in images, such as edges or curves.
- Pooling Layers: Reduce the size of the image while retaining important information, which helps the network learn efficiently.
- Activation Functions (ReLU): Introduce non-linearity so the network can model complex patterns.
- Dropout Layers: Randomly disable some neurons during training to prevent overfitting.
- Fully Connected Layers: Combine extracted features to classify the image into a category.
In short, CNNs mimic how the human visual system works: starting from simple lines and edges, they build up to complex shapes like a cat’s face or a tree.
Step 1: Setting Up the Environment
Before we build the model, we need to install some libraries. These libraries will help us:
- TensorFlow/Keras: Build and train neural networks
- NumPy: Handle large arrays of data
- Matplotlib: Visualize images and graphs
- OpenCV/Pillow: Work with images
- Flask-Ngrok: Run web applications from Colab
!pip install tensorflow keras numpy matplotlib opencv-python pillow flask-ngrok
Step 2: Understanding the Data
We need a dataset of doodles to teach our AI. We use the Google QuickDraw dataset, which contains hundreds of thousands of doodles drawn by people around the world.
We focus on 8 categories:
CATEGORIES = ['cat', 'dog', 'house', 'tree', 'car', 'apple', 'banana', 'clock']
Each category contains 28x28 pixel grayscale images, which are tiny black-and-white images perfect for training our model.
Step 3: Downloading and Preprocessing the Data
We need to:
- Download the doodle files from Google QuickDraw.
- Convert them into arrays the AI can understand.
- Normalize the data so values range between 0 and 1.
Here’s what the code does:
def download_quickdraw_data():
    base_url = "https://storage.googleapis.com/quickdraw_dataset/full/numpy_bitmap/"
    data_dir = 'quickdraw_data'
    os.makedirs(data_dir, exist_ok=True)
    X, y = [], []
    for i, category in enumerate(CATEGORIES):
        filename = f"{category.replace(' ', '%20')}.npy"
        filepath = os.path.join(data_dir, filename)
        if not os.path.exists(filepath):
            response = requests.get(base_url + filename, stream=True)
            with open(filepath, 'wb') as f:
                for chunk in response.iter_content(chunk_size=1024):
                    if chunk:
                        f.write(chunk)
        category_data = np.load(filepath)[:10000]
        category_data = category_data.reshape(-1, 28, 28, 1).astype('float32') / 255.0
        X.append(category_data)
        y.append(np.full(len(category_data), i))
    X = np.vstack(X)
    y = np.hstack(y)
    y = to_categorical(y, num_classes=len(CATEGORIES))
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
    X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=42, stratify=y_train)
    return X_train, X_val, X_test, y_train, y_val, y_test

X_train, X_val, X_test, y_train, y_val, y_test = download_quickdraw_data()
The download_quickdraw_data() function handles the entire process of preparing the Google QuickDraw dataset for training a Convolutional Neural Network (CNN).
First, it downloads doodle files for each category (cat, dog, tree, etc.) if they are not already stored locally. Each doodle image is loaded, limited to 10,000 samples per category, reshaped to a 28x28 pixel grayscale format with a single channel, and normalized so that pixel values fall between 0 and 1.
This normalization helps the CNN learn more efficiently. For each category, a corresponding numeric label is created, and all images and labels are combined into unified arrays suitable for model training.
After preprocessing, the function splits the dataset into training, validation, and test sets: 20% is held out for testing, then 10% of the remaining data for validation, giving roughly a 72/8/20 split.
Labels are converted to one-hot encoded vectors, which is necessary for multi-class classification with categorical cross-entropy loss. The returned arrays X_train, X_val, X_test, y_train, y_val, y_test are ready for feeding into a CNN, allowing the model to learn patterns from the doodles, validate its performance during training, and finally evaluate accuracy on unseen test data.
Step 4: Visualizing the Doodles
Before training, it’s fun and important to see what we’re working with:
plt.figure(figsize=(12, 6))
for i in range(10):
    plt.subplot(2, 5, i+1)
    plt.imshow(X_train[i].reshape(28, 28), cmap='gray')
    plt.title(CATEGORIES[np.argmax(y_train[i])])
    plt.axis('off')
plt.tight_layout()
plt.show()
You’ll see little sketches of cats, cars, trees, and more.

Step 5: Building the CNN Model
A Convolutional Neural Network (CNN) is a type of AI that is excellent at recognizing images. Think of it like this:
- Convolution layers → Detect patterns (lines, curves, shapes)
- Pooling layers → Reduce image size to focus on important features
- Dropout layers → Prevent overfitting (helps the AI generalize better)
- Dense layers → Make the final decision about which category the image belongs to
Here’s our model (create_improved_model is defined in the GitHub repository):
model = create_improved_model(input_shape=(28,28,1), num_classes=len(CATEGORIES))
model.summary()
Our CNN model consists of five convolutional layers, each followed by batch normalization and dropout layers. The convolutional layers act as hierarchical feature extractors, progressively learning from simple patterns like lines and edges in the first layers to more complex shapes and textures in deeper layers.
Batch normalization stabilizes the learning process by normalizing the outputs of each layer, which helps the network train faster and more reliably.
Dropout layers randomly deactivate a fraction of neurons during training, preventing the model from overfitting and ensuring it generalizes well to unseen doodles.
After the convolutional and pooling layers have extracted meaningful features, the network flattens the output into a one-dimensional vector and passes it through dense (fully connected) layers. These dense layers integrate all the extracted features to make the final prediction, determining the category of the doodle.
In total, the model has approximately 540,000 trainable parameters, a carefully chosen size that balances computational efficiency and learning capacity. This architecture enables the CNN to effectively learn and differentiate between doodle categories such as cats, cars, trees, and more, while maintaining strong generalization performance on new, unseen drawings.
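The exact create_improved_model definition lives in the GitHub repository; as a rough illustration of the architecture described above, here is a plausible sketch of a five-convolutional-layer Keras model with batch normalization and dropout. The filter counts, dropout rates, and dense-layer size here are illustrative assumptions, not the repository's exact values:

```python
from tensorflow import keras
from tensorflow.keras import layers

def create_improved_model(input_shape=(28, 28, 1), num_classes=8):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        # Block 1: low-level features (edges, strokes)
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Block 2: mid-level shapes
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Block 3: high-level patterns
        layers.Conv2D(128, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Classifier head
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```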
Step 6: Data Augmentation
When training a neural network, one common challenge is that the model might memorize the exact training images rather than learning the general patterns. This is called overfitting, and it leads to poor performance on new, unseen data. One powerful technique to combat this is data augmentation, which artificially expands the dataset by creating slightly modified versions of existing images.
In our project, we use image transformations such as:
- Rotation: The doodle is rotated slightly (e.g., ±10 degrees). This helps the model recognize sketches even if the user draws them at a slight angle.
- Zoom: The image is scaled up or down slightly. This ensures that the model can handle doodles of different sizes.
- Width and Height Shifts: The doodle is moved slightly left/right or up/down. This prevents the model from being sensitive to the exact placement of the drawing in the canvas.
- Shear (Tilt): The image is tilted slightly, simulating minor distortions that might occur when a user draws freely.
datagen = ImageDataGenerator(
    rotation_range=10,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1
)
This helps the model recognize doodles even if they’re drawn slightly differently.
Step 7: Training the Model
Once the data is preprocessed and the model is defined, the next step is training the CNN. Training means letting the model learn patterns from the doodles by adjusting its internal parameters (weights) to minimize errors in predictions. This process involves feeding the network batches of images and updating the weights using an optimization algorithm like Adam.
To make training more efficient and prevent overfitting, we use callbacks: special functions that monitor the training process and take actions automatically.
- EarlyStopping: If the model stops improving on the validation set for several epochs, training is halted. This prevents wasting time and reduces overfitting by not training longer than necessary.
- ModelCheckpoint: This saves the model’s weights whenever the validation accuracy improves. At the end of training, we have the best performing version of the model saved.
- ReduceLROnPlateau: If the model stops improving, this callback reduces the learning rate. A smaller learning rate helps the network make finer adjustments to its weights and escape plateaus during training.
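The callbacks list passed to model.fit can be assembled from the standard Keras callback classes; a minimal sketch (the monitored metrics, patience values, and learning-rate factor here are illustrative assumptions, not taken from the repository):

```python
from tensorflow.keras.callbacks import (
    EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
)

callbacks = [
    # Stop if validation loss hasn't improved for 5 epochs,
    # and roll back to the best weights seen so far
    EarlyStopping(monitor='val_loss', patience=5,
                  restore_best_weights=True),
    # Save the model whenever validation accuracy improves
    ModelCheckpoint('best_doodle_model.h5',
                    monitor='val_accuracy', save_best_only=True),
    # Halve the learning rate when progress stalls
    ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                      patience=3, min_lr=1e-6),
]
```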
history = model.fit(
    datagen.flow(X_train, y_train, batch_size=128),
    steps_per_epoch=len(X_train)//128,
    epochs=50,
    validation_data=(X_val, y_val),
    callbacks=callbacks
)
After training, we achieve ~95% test accuracy, which is excellent for doodle recognition.
Step 8: Evaluating the Model
After training, the next crucial step is evaluating the CNN to understand how well it can recognize new doodles it has never seen before. This helps us verify that the model has learned meaningful patterns rather than just memorizing the training data.
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {test_acc:.4f}")
- X_test and y_test contain doodles and their corresponding labels that the model hasn’t seen during training.
- test_loss indicates how far off the model’s predictions are from the true labels.
- test_acc shows the fraction of correct predictions. In our case, the model typically achieves ~95% accuracy, meaning it correctly identifies 95 out of 100 doodles on average.
We also visualize:
- Accuracy and Loss over epochs
- Confusion matrix to see which categories are confused

sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=CATEGORIES, yticklabels=CATEGORIES)
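The confusion matrix cm passed to sns.heatmap can be computed with scikit-learn. A minimal sketch using toy label arrays in place of the real predictions; in the project, y_true and y_pred would come from np.argmax over y_test and model.predict(X_test):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# In the real project these come from the trained model:
#   y_true = np.argmax(y_test, axis=1)
#   y_pred = np.argmax(model.predict(X_test), axis=1)
# Toy arrays here just to show the shape of the computation.
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 1, 0])

# Rows are true classes, columns are predicted classes;
# off-diagonal entries are misclassifications.
cm = confusion_matrix(y_true, y_pred)
```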

Finally, we display some predictions vs. true labels to see the model in action.

Step 9: Saving the Model
Once we’re happy with the performance, we save the trained model for future use:
model.save('best_doodle_model.h5')
You can now load this model anytime to make predictions.
Step 10: What’s Next? Deploying DoodlAI
With the model ready, the next step is building a web application:
- Backend: FastAPI + TensorFlow/Keras → Serve predictions in real-time
- Frontend: React → Canvas for drawing, game/free draw modes
- Deployment: Host on a web server, making it accessible to everyone
Users can then draw doodles on the web, and the AI will instantly predict the category with a confidence score.


Conclusion
DoodlAI is a fun and beginner-friendly project to learn deep learning and AI deployment. You’ll understand:
- How CNNs recognize images
- How to preprocess and augment data
- How to train and evaluate models
- How to save and deploy an AI model
It’s an exciting way to combine coding, AI, and creativity!
DoodlAI- Build a Real-Time Doodle Recognition AI with CNN was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
Learning Triton One Kernel At a Time: Vector Addition
The basics of GPU programming, optimisation, and your first Triton kernel
The post Learning Triton One Kernel At a Time: Vector Addition appeared first on Towards Data Science.
What Clients Really Ask for in AI Projects
Managing AI projects is no walk in the park, but you have the power to make it easier for everyone
The post What Clients Really Ask for in AI Projects appeared first on Towards Data Science.