BLOG POST

AI Daily

120 min
September 29, 2025
Daily · AI · Industry Watch

30 years later, I’m still obliterating planets in Master of Orion II—and you can, too

There's an unparalleled purity to MOO2's commitment to the fantasy.

150 million-year-old pterosaur cold case has finally been solved

The storm literally snapped the bones in their wings.


More people are using AI in court, not a lawyer. It could cost you money – and your case

Researchers found more than 80 cases of generative AI use in Australian courts so far – mostly by people representing themselves. That comes with serious risks.

Generative AI might end up being worthless — and that could be a good thing

GenAI does some neat, helpful things, but it’s not yet the engine of a new economy — and it might not ever be.


The Guardian view on AI and jobs: the tech revolution should be for the many not the few | Editorial

Britain risks devolving its digital destiny to Silicon Valley. As a TUC manifesto argues, those affected must have a greater say in shaping the workplace of the future

In The Making of the English Working Class, the leftwing historian EP Thompson made a point of challenging the condescension of history towards luddism, the original anti-tech movement. The early 19th-century croppers and weavers who rebelled against new technologies should not be written off as “blindly resisting machinery”, wrote Thompson in his classic history. They were opposing a laissez-faire logic that dismissed its disastrous impact on their lives.

A distinction worth bearing in mind as Britain rolls out the red carpet for US big tech, thereby outsourcing a modern industrial revolution still in its infancy. Photographers, coders and writers, for example, would sympathise with the powerlessness felt by working people who saw customary protections swept away in a search for enhanced productivity and profit. Unlicensed use of their creative labour to train generative AI has delivered vast revenues to Silicon Valley while rendering their livelihoods increasingly precarious.


‘To them, ageing is a technical problem that can, and will, be fixed’: how the rich and powerful plan to live for ever

When Xi Jinping and Vladimir Putin were caught on mic talking about living for ever, it seemed straight out of a sci-fi fantasy. But for some death is no longer considered an inevitability …

Imagine you’re the leader of one of the most powerful nations in the world. You have everything you could want at your disposal: power, influence, money. But, the problem is, your time at the top is fleeting. I’m not talking about the prospect of a coup or a revolution, or even a democratic election: I’m talking about the thing even more certain in life than taxes. I’m talking about death.

In early September, China’s Xi Jinping and Russia’s Vladimir Putin were caught on mic talking about strategies to stay young. “With the development of biotechnology, human organs can be continuously transplanted, and people can live younger and younger, and even achieve immortality,” Putin said via an interpreter to Xi. “There’s a chance,” he continued, “of also living to 150 [years old].” But is this even possible, and what would it mean for the world if the people with power were able to live for ever?



NBA Coach JJ Redick Says He Spends Hours Talking to His “Friend” ChatGPT

"I'm the type of person who, y'know, spends an hour and a half going down a deep, deep rabbit hole on ChatGPT."


OpenAI’s New Data Centers Will Draw More Power Than the Entirety of New York City, Sam Altman Says

"Ten gigawatts is more than the peak power demand in Switzerland or Portugal."


Elon Musk Is Fuming That Workers Keep Ditching His Company for OpenAI

His blood feud with Sam Altman rages on.


Residents Shut Down Google Data Center Before It Can Be Built

Google Fail


AI Coding Is Massively Overhyped, Report Finds

"The results haven’t lived up to the hype."


SAP Exec: Get Ready to Be Fired Because of AI

"I will be brutal."


First Responders Are Being Overwhelmed by Data Center Fires

"We're not a huge fan of the data centers."



Quantum chips just proved they’re ready for the real world

Diraq has shown that its silicon-based quantum chips can maintain world-class accuracy even when mass-produced in semiconductor foundries. Achieving over 99% fidelity in two-qubit operations, the breakthrough clears a major hurdle toward utility-scale quantum computing. Silicon’s compatibility with existing chipmaking processes means building powerful quantum processors could become both cost-effective and scalable.


BYD Brings Price War to Japan in Bid to Win Over Customers

More than two years after BYD Co.’s high-profile foray into the Japanese market, the Chinese electric vehicle maker is still struggling to win over drivers.

EA Buyout Talk Highlights Gaming Struggles as Growth Slows

The gaming market has matured and analysts see slower growth moving forward

Apple’s ChatGPT-Style Chatbot App Deserves a Public Release

Apple should release its internal ChatGPT-like app publicly to give its revamped AI system more credibility. Also: New MacBooks and external Mac monitors get closer; more on the iPhone 17 Pro’s “scratchgate” controversy; and Tim Cook’s latest memo to employees.


I don't tell my kids I'll miss them when I travel without them. It's the truth.

I love my kids but also enjoy time without them. I'm a better parent when I come back because I'm reminded of who I am besides mom.

Eric Adams drops out of the New York mayor's race weeks ahead of the election

Adams announced his departure on Sunday in an eight-minute video on X, saying media speculation and lack of funding influenced his decision.

Ex-Twitch CEO's advice for leaders: Don't over-delegate or forget you can override your experts

Ex-Twitch CEO Emmett Shear said a CEO's job is not only to delegate but also to discern: "Is this the kind of decision that we have to get right?"

I've stayed in all kinds of places across 100 countries, but there's still one type of accommodation I never book

During my travels to over 100 countries, I've stayed in hotels, hostels, and unique spots. But there are reasons I never book all-inclusive resorts.

I made pumpkin bread with just 2 ingredients. It reminded me of my favorite Starbucks treat.

The recipe calls for just one box of spice cake mix and one 15-ounce can of 100% pumpkin purée.

My job offers little chance for career growth, but I'm sticking with it. It gives me the flexibility I need as a parent.

As a parent, I've had to choose between looking for a job that pays more or staying at a job that offers flexibility.

My son moved home because of the high cost of living and a low-paying, entry-level job. I never got to be an empty nester.

My son just graduated from college and landed a great entry-level job, but the pay isn't great. He decided to move back home to save money.

My family moved from Vancouver to Toronto for a few months to live with my parents. There have been pros and cons.

Living with my parents in Toronto for the summer rent-free has given them priceless time with their granddaughter. The pros have outweighed the cons.

I don't get the whole Costco craze

In this Sunday edition of Business Insider Today, we're talking about America's Costco craze. BI's Steve Russolillo doesn't get the hype.

I'm a morning show contributor, and my husband is a firefighter. My daughter's grandparents make our nontraditional careers work.

My daughter isn't "watched" while my husband and I are working; she's played with, cared for, and loved on a level that can't be described.

What will Charlie Javice's sentence be for her $175M defrauding of JPMorgan Chase? Much depends on the word 'loss.'

Charlie Javice says she deserves a low sentence and restitution because JPMC gained some value from its otherwise fraud-based purchase of Frank.

We're first-time hybrid homeschoolers. We receive a stipend and spend more time with our kids — being adaptable is what makes it work.

Hybrid homeschoolers Marcus and Hannah Ward save money and spend more time with family with charter school classes for their daughter.

I've worked in global banking for 25 years. These are the 6 most important pieces of financial advice I tell family and friends.

Racquel Oden, US head of wealth and private banking at HSBC, shares how to start saving immediately and prioritize investments over student loans.

I flew sitting in a windowless window seat, and was surprised to find it might be the best spot on the plane for a power nap

A windowless window seat might sound like one of the worst places on a plane, but I was surprised to find it made for a decent in-flight nap.

I've interned at IBM since high school. It's taught me 3 key lessons about building a career in tech.

Gogi Benny shares his experience in tech, living with neurofibromatosis, and advancing as an IBM intern after starting in high school.

Leading computer science professor says 'everybody' is struggling to get jobs: 'Something is happening in the industry'

UC Berkeley professor Hany Farid said the advice he gives students is different in the AI world.

I went to the Ryder Cup and calculated the eye-watering cost of spending a single day there

Between the $32 cocktail, the seemingly endless merch options, and the temptation of an Uber, the cost of attending the Ryder Cup can add up.

Welcome to the Great Silencing

CEOs were already cautious about speaking their minds. Now, they're becoming even more tight-lipped.

3 reasons the US can't count on wealthy Americans to keep the economy going strong

Wealthy Americans may not be able to power the economy with spending as much as some people think, BCA Research says.

Goldman's tech boss discusses the future of AI on Wall Street — and how it will reshape careers

Goldman Sachs' chief information officer, Marco Argenti, discusses his vision for AI and its impact on his 12,000-person engineering team.


Flip Samurai – Learn Anything with Flashcards

This is a submission for the KendoReact Free Components Challenge.

For the KendoReact Free Components Challenge, I built Flip Samurai, a flashcard learning app that helps you master any subject through spaced repetition.

👉 Live Demo: flipping-samurai.vercel.app
👉 Source Code: GitHub Repo

Even though I don’t have much frontend experience, building with KendoReact made the process surprisingly smooth. Its free components gave me everything I needed to create a polished, responsive, and accessible UI without getting stuck on small details.

  • Collections – Group flashcards by topic.
  • Folders – Organize collections into folders.
  • Favorites – Mark your most important collections.
  • AI-Generated Collections – Instantly create flashcards with AI (powered by Fastify + Google Gemini).
  • Import/Export – Back up and share your study sets.
  • Dashboard – Track progress, study sessions, and mastery levels.
  • Cards to Review – Stay on top of spaced repetition.

All data is stored in LocalStorage on the frontend, making it simple and lightweight to use.
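The scheduling behind "Cards to Review" can be surprisingly small. The app's exact algorithm isn't documented here, so the following is only a hypothetical, language-agnostic sketch (in Python) of a Leitner-style scheduler; the names are illustrative, not taken from the project:

from datetime import date, timedelta

# Hypothetical Leitner-style scheduler: each correct answer moves a card to a
# longer interval, a miss sends it back to the first box.
INTERVALS_DAYS = [1, 2, 4, 8, 16, 32]

def schedule_next_review(box: int, answered_correctly: bool) -> tuple[int, date]:
    """Return the card's new box and the date it is due for review again."""
    if answered_correctly:
        box = min(box + 1, len(INTERVALS_DAYS) - 1)
    else:
        box = 0  # missed cards start over
    return box, date.today() + timedelta(days=INTERVALS_DAYS[box])

# Example: a card in box 2 answered correctly moves to box 3 and is due in 8 days.
new_box, due = schedule_next_review(2, True)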

  • React + Vite + TypeScript
  • KendoReact Free Components
  • Bootstrap
  • Fastify (backend for AI generation)
  • Google Gemini API

I made heavy use of KendoReact’s free UI components to make the interface clean, fast, and user-friendly. Some of the key components include:

  • Buttons (start studying, reviewing, AI generation, etc.)
  • Dialogs (confirmation modals and AI generation flow)
  • Notifications (success/error feedback)
  • Cards & Layout (collection previews and dashboard)
  • Inputs & Labels (create/edit collections and cards)
  • Grid (statistics and progress overview)
  • Indicators & ProgressBars (study progress)
  • Tooltip, Skeleton, Dropdowns, ListBox, Popup, and SVG Icons for extra polish.

This variety of components really helped me design a smooth user experience quickly.

Collections Page
Manage and explore your flashcard collections.

Study Mode
Study cards with progress tracking.

Dashboard
Track your learning journey with detailed stats.

Folders Page
Group your collections with folders

I integrated AI-powered collection generation: users can input a topic, and the backend (Fastify + Gemini) generates a full flashcard set automatically. This feature saves time and makes the app more dynamic.
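For context, here is a minimal Python sketch of the kind of call such a backend might make. The real backend is Fastify/Node, and the package, model name, and prompt below are assumptions rather than the project's actual code:

import json
import os

import google.generativeai as genai  # assumes the google-generativeai package

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model choice is an assumption

def generate_flashcards(topic: str, count: int = 10) -> list[dict]:
    """Ask Gemini for a JSON list of {front, back} flashcards on a topic."""
    prompt = (
        f"Create {count} flashcards about '{topic}'. "
        'Respond with JSON only, in the form [{"front": "...", "back": "..."}].'
    )
    response = model.generate_content(prompt)
    text = response.text.strip()
    # Strip markdown fences if the model wraps its JSON in a code block.
    text = text.removeprefix("```json").removesuffix("```").strip()
    return json.loads(text)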

Even as someone with little frontend background, I felt productive and creative with KendoReact. The ready-to-use components removed a lot of friction and let me focus on building features, not fighting the UI.

👉 Try it here: flip-samurai.vercel.app

MLZC25-01. Introduction to Machine Learning: What Is It and Why Does It Matter?

When we hear "Machine Learning," we often picture intelligent robots or systems that seem to have a life of their own. But the reality is far more fascinating, and far more accessible, than we imagine.

Machine learning is a branch of artificial intelligence that allows computers to learn and make decisions from data, without being explicitly programmed for each specific task.

Think of it this way: instead of writing thousands of lines of code so a computer can recognize a cat in a photo, we show it thousands of photos of cats and other animals, and the computer "learns" to tell them apart on its own.

In our everyday lives:

  • Netflix recommendations: Ever wondered how Netflix knows exactly which movie you'll like? ML analyzes your viewing patterns.
  • GPS navigation: Smart maps that predict traffic and find the fastest routes.
  • Spam detection: Your email automatically filters out unwanted messages.
  • Virtual assistants: Siri, Alexa, and Google Assistant understand and respond to your commands.

In industry:

  • Medicine: Diagnosing diseases from medical images.
  • Finance: Detecting fraud in banking transactions.
  • Agriculture: Optimizing harvests and predicting pests.
  • Transportation: Autonomous vehicles navigating the streets.

1. Data

Without data, there is no learning. Data is the "food" that lets algorithms learn patterns and make decisions.

2. Algorithms

These are the mathematical "recipes" that process the data to find patterns and make predictions.

3. Models

Models are the result of the learning process: a simplified representation of reality that can make predictions on new data.

1. Growing demand

The job market is desperately seeking professionals with ML skills. It is one of the best-paid professions in the tech sector.

2. Accessibility

Tools like Python, scikit-learn, and TensorFlow have democratized access to ML. You no longer need to be a math genius to get started.
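As a tiny illustration of that accessibility, a working classifier fits in a few lines of scikit-learn. This example is generic, not part of the course material:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A classic toy dataset: flower measurements labeled by species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" here is simply fitting the model to examples, not hand-coding rules.
model = DecisionTreeClassifier().fit(X_train, y_train)
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")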

3. Real impact

You can build solutions that genuinely improve people's lives, from more accurate medical diagnoses to systems that optimize energy consumption.

As you begin this journey, it is important to understand that machine learning is not magic. It is a powerful tool, but it also requires:

  • Critical thinking: Data can be biased, and models can be unfair.
  • Curiosity: Always asking "why does this work?" and "what could go wrong?"
  • Ethics: Remembering that our algorithmic decisions affect real lives.

In the following posts we will explore:

  • The different types of machine learning
  • The tools you need to get started
  • How Python became the favorite language for ML
  • The importance of exploratory data analysis
  • Preprocessing techniques
  • And we will reflect on our first hands-on assignment

A question for you: Think about your day-to-day. Which of your activities could benefit from machine learning? What problems would you like to solve using data and algorithms?

Excited about this journey? We are only getting started! In the next post we will explore the different types of machine learning and when to use each one.

Image Flow Editor

This is a submission for the KendoReact Free Components Challenge.

I am neither a designer nor an image editor. However, I find myself going to Google every now and then, searching for background removers, light enhancement, and so on. I go to Figma as well to add some effects, apply masks, or whatever.

As a casual image editor, I need something that works right away with the fewest clicks possible. I am not willing to invest my time in learning something I need only a few times a month.

Figma is great. However, when any tool starts to gain traction, the team behind it starts to seek growth and domination. Figma and Canva started as simple tools that did one or two things. Now they require a learning curve to get things done.

These days, to be honest, I find myself going to Excalidraw to draw my creatives for posts and whatnot. Why? I believe in the saying: "With great power comes great responsibility." Applied to this field, it becomes: "With more options come more distractions." For a simple task like removing a background and adding a stroke for a stamp-like effect, you start exploring the never-ending plugin ecosystem, searching for one that does both. A task of three minutes at most turns into a full day of trying different options and signing up for countless services for an API key. Of course, days later, you still haven't edited your image and have forgotten about the project altogether.

The Concept

I imagine myself as the main end user:

  1. I can upload multiple images.
  2. Add some tasks or editing flow as drag and drop.
  3. Hit compile or run.
  4. Get the result files (preview, download, download all).

It should be as simple as it was described.

Demo Video:

You can test the app from these two domains:

You can also find the repository for this project here.

The components used in this project are:

  • kendo-react-common
  • kendo-react-intl
  • kendo-react-buttons
  • kendo-react-inputs
  • kendo-react-dropdowns
  • kendo-react-dialogs
  • kendo-react-notification
  • kendo-react-indicators
  • kendo-react-animation
  • kendo-popup-common
  • kendo-react-popup
  • kendo-react-layout
  • kendo-react-labels
  • kendo-react-data-tools
  • kendo-react-charts

I didn't have much time for the challenge. Even though I have ten years of experience with Kendo UI and Telerik, I mainly used the jQuery version for dashboard-related work. Still, five days is not enough time for me to come up with an idea, code it, and write about it while doing my day job.

Since the trend this year is all about vibe coding, coding assistants, agents, and AI editors, why not go down this road and vibe-code my idea for this challenge?

I had tried a couple of these vibe-coding platforms before. However, the main problem I usually face is their limitations. If you let the agent code the way it knows how, it will give you working prototypes that are fast and almost perfect. But when you ask it to use a specific library or framework, it struggles and only raises your blood pressure.

So I started with Google AI Studio Build. It has proved reliable when it comes to using libraries with React and Angular. This time, however, it wasn't efficient: it kept showing errors that it couldn't fix. I even deleted the project and started from scratch. The Kendo MCP configuration didn't seem to work in this environment at first, either.

Screenshot from AI Studio.

So, I downloaded the codebase and opened my Zed editor. I configured the Kendo MCP and started prompting.

The first iteration was easy. I prompted Zed to fix the issues, which it did very quickly. The problem was React being imported in multiple places from different CDNs. I updated the AI Studio codebase and it worked.

Later on, things started to get very nasty. Both Zed and AI Studio were needed for different tasks. I tried to stop using AI Studio, but the API-key calls from my local version kept blocking me, claiming that I had exceeded the limits or whatever.

In the end, I continued in AI Studio. I added the Kendo MCP configuration in its settings. Luckily for me, it started using it to access the latest documentation and recognized my prompts with ease. It even fixed a licensing issue caused by using a premium feature of the dropdown component, and the same happened with the button at one point. I wonder whether the MCP can pinpoint the paid features inside free components without making a mistake or needing me to point them out; I never pointed to the component causing the issue, only gave feedback about the licensing strips showing in the dialog form.

I have to say, I have never felt so uneasy adding a feature as I did in this project. AI Studio ruined my app twice while adding a simple button. It's unfortunate that the GitHub integration isn't working correctly, so I had to download a copy every now and then.

Let's be honest: Kendo UI shines in dashboards. In my previous job, we decided to buy it because of how straightforward building dashboards was with the Data Grid, Charts, Editors, and so on.

After the first prompt about the dashboard, all my fears were confirmed. It crashed and I had to track down errors and fix them. The same error kept happening whenever I added a new component: AI Studio uses the ESM CDN for its package management, and the problem was versioning, so I had to check the right version for each component or package.

I was hoping to use the tiling layout and Data Grid components. Unfortunately, even the little I was doing with them showed the licensing strips, so I removed them altogether.

Even though I prefer Cloudflare these days, I chose Vercel to host this project. However, to deploy it outside of AI Studio I had to make some changes.

The first thing I did was rework the API key handling, adding an option for users to supply their own Gemini API key. This way, anyone can use the app without me worrying about credits and so on. Of course, this will change if the app gets to a certain point.

Once again, I used only free components. Yet the dialog for the API configuration showed the license strips! I was confused, so I prompted Claude in Zed to use the MCP and investigate the issue. Unfortunately, the investigation didn't lead anywhere good: Claude went and deleted the Kendo theme, did other unnecessary things, and then went out of service because I had exceeded my usage limit.

I switched the model to ChatGPT 5 mini through GitHub Copilot and prompted:

in the @ApiKeyConfig.tsx file there something that make Kendo add the license strips. Please use the kendo react mcp to investiage this. Don't do anything until you ask

This model did a better job with the Kendo MCP. It asked for a list of all premium components, then asked for free components with premium features, and compiled a plan for me on how to fix the issue. After executing the plan, the license strips were gone.

I started this project for fun. After spending almost five days working on it, I am planning to keep enhancing it to see how it turns out. I added a to-do list to the README file of this project, but that's just the starting point. First, though, let's see how it performs in this challenge.

Data Analyses — Wizard

This is a submission for the KendoReact Free Components Challenge.

Data Analyses — Automate Wizard (KendoReact Challenge submission)

I built Data Analyses — Automate Wizard, a small React app that helps users import tabular files (CSV/XLSX), automatically analyzes the dataset, and generates polished, accessible charts and dashboard cards using KendoReact Free Components.

The app’s goal is to let non-technical users get immediate insights from an uploaded spreadsheet:

  • auto-detects numeric / date columns,
  • suggests the most useful charts and small dashboard cards (totals, averages, top categories),
  • aggregates and prepares data for charts (bar, line, pie, donut, area),
  • offers a Chart Wizard for manual column-to-chart mapping,
  • accepts exported charts (PDF, PNG, SVG) and lets users pin them to the dashboard.

I focused on usability (one-click suggestions + preview), accessibility, and making the analysis pipeline safe (AI only receives a compact column summary / sample, not full PII).
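To make the "compact column summary" idea concrete, here is a rough Python sketch; the project's own helpers live in the TypeScript utils, so the names and details below are illustrative only:

from typing import Any

def summarize_column(name: str, values: list[Any], sample_size: int = 5) -> dict:
    """Build a small, PII-light summary of one column for the AI prompt."""
    non_null = [v for v in values if v not in (None, "")]
    numeric = [float(v) for v in non_null if isinstance(v, (int, float))]
    summary = {
        "name": name,
        "inferred_type": "numeric" if numeric and len(numeric) == len(non_null) else "text",
        "sample_values": non_null[:sample_size],  # a handful of examples, never full rows
        "distinct_count": len(set(map(str, non_null))),
    }
    if numeric:
        summary["min"], summary["max"] = min(numeric), max(numeric)
        summary["mean"] = round(sum(numeric) / len(numeric), 3)
    return summary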

Screenshots and the Loom video are in the repo README. The video demonstrates uploading a CSV, the AI suggestions panel, auto-generated charts/cards, and the Chart Wizard flow.

  • Upload CSV / XLSX files and preview sample rows.
  • Automatic dataset analysis (sample-based) using a generative AI assistant (Gemini) that recommends charts and cards.
  • Auto-aggregation helpers (sum/avg/count, time-series aggregation with month-year/year granularity).
  • Interactive Chart Wizard integration so users can pick fields and build charts manually if desired.
  • Chart gallery: Bar, Line, Area, Pie, Donut (all generated from the same pipeline).
  • Upload/export support for images/PDFs (pin exported charts to dashboard).
  • Skeletons + progressive UI while parsing/aggregating.
  • Lightweight theme scoping so Kendo styling is applied only where needed.

I used multiple free KendoReact components to build the UI and charts. Key components used in the project:

  • Input — file picker UI (from @progress/kendo-react-inputs)
  • Chart and chart subcomponents (from @progress/kendo-react-charts):

    • Chart (root)
    • ChartSeries / ChartSeriesItem
    • ChartCategoryAxis / ChartCategoryAxisItem
    • ChartValueAxis / ChartValueAxisItem
    • ChartTitle
    • ChartSeriesLabels
    • ChartTooltip / ChartNoDataOverlay
  • ChartWizard (from @progress/kendo-react-chart-wizard) — interactive mapping wizard for users to create charts from table data

  • Skeleton (from @progress/kendo-react-indicators) — loading placeholders while files parse

  • Tooltip (from @progress/kendo-react-tooltip) — contextual help and quick actions

  • (plus some Kendo “intl” helpers where needed for formatting)

Note: many Chart subcomponents are imported from the charts package (e.g., ChartSeriesItem, ChartCategoryAxisItem, etc.). Together they provide the full charting capabilities used across the app.

  • I used Google Generative AI (Gemini) via a small client wrapper to analyze a sample of the uploaded data (column names + a few sample values and summary stats). The prompt instructs the model to return JSON only with:

    • recommended charts (type, groupBy, metric, aggregation, topN, granularity)
    • recommended small dashboard cards (specs only: field, aggregation, label, format).
  • The frontend validates the AI response, then computes actual numbers locally (e.g., sums/averages/top categories) to avoid trusting the model for numeric calculations and to keep the source-of-truth on the client data.

  • This approach lets the AI do lightweight analysis and suggestions while the app remains in control of real computations (privacy and correctness).
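As a rough illustration of that division of labor (again in Python for brevity; the real helpers are TypeScript), the AI only returns a spec, which is validated and then evaluated against the local rows:

ALLOWED_AGGREGATIONS = {"sum", "avg", "count"}

def apply_card_spec(rows: list[dict], spec: dict) -> float:
    """Validate an AI-suggested card spec, then compute its value from local data."""
    field, agg = spec.get("field"), spec.get("aggregation")
    if not field or agg not in ALLOWED_AGGREGATIONS:
        raise ValueError(f"Rejected AI suggestion: {spec!r}")
    # The numbers always come from the user's own rows, never from the model.
    values = [float(r[field]) for r in rows if r.get(field) not in (None, "")]
    if agg == "count":
        return float(len(values))
    if agg == "sum":
        return sum(values)
    return sum(values) / len(values) if values else 0.0

# Example: the AI proposes {"field": "revenue", "aggregation": "sum", "label": "Total revenue"},
# and the app computes the total itself from the uploaded spreadsheet.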

(If you prefer, this submission is not entered for the Kendo AI Coding Assistant prize — I used Gemini instead. No Nuclia RAG integration is included.)

# clone
git clone https://github.com/MatheusDSantossi/data-analyses-automate.git
cd data-analyses-automate

# install
npm install

# start dev server
npm run dev
# open http://localhost:5173 (Vite default)

Environment

  • Add your Gemini key to .env:
VITE_GEMINI_API_KEY=<your-gemini-key>

Notes

  • The app uses ExcelJS to read xlsx files and a CSV parser helper for CSV. It also uses Kendo’s default theme — I included an optional scoped theme option so Kendo styles only apply inside the dashboard container.
  • For the AI assistant I only send column summaries (no raw rows). If you plan to demo with real data containing PII, please exercise caution.
  • Use of underlying technology — Chart/wizard powered by KendoReact Charts & ChartWizard; Kendo components used for inputs/UX; dataset parsing with ExcelJS; AI-assisted recommendations.
  • Usability & User Experience — immediate suggestions, skeletons while parsing, Chart Wizard for manual adjustments, export previews (PDF/Image), responsive grid layout.
  • Accessibility — charts have labels and legends, skeletons are presentational only, and UI controls use semantic elements (and aria-* where appropriate).
  • Creativity — combining a generative model to suggest charts/cards + a Chart Wizard for manual exploration makes analysis quick while preserving user control.
  • src/pages/Dashboard.tsx — main dashboard, AI integration, aggregation orchestration, generated-charts renderer
  • src/utils/wizardData.ts — helpers to transform rows into wizard-ready { field, value } arrays and aggregation helpers
  • src/utils/aiAnalysis.ts — build prompt, call AI wrapper (getResponseForGivenPrompt) and safe JSON parsing
  • src/components/charts/*: BarChart, LineChart, DonutChart, PieChart, AreaChart components (Kendo-powered)
  • src/components/Wizard.tsx — ChartWizard integration

Augmented Intelligence (AI) Coding using Markdown-Driven Development

TL;DR: Deep research the feature, write the documentation first, go YOLO, work backwards... Then magic. ✩₊˚.⋆☾⋆⁺₊✧

In my last post, I outlined how I was using Readme-Driven Development. In this post, I will outline how I implemented a 50-page RFC over the course of a weekend.

My steps are:

Step 1: Design the feature documentation with an online thinking model
Step 2: Export a description-only "coding prompt"
Step 3: Paste to an Agent in YOLO mode (--dangerously-skip-permissions)
Step 4: Force the Agent to "Work Backwards"

Open a new chat with an LLM that can search the web or do "deep research". Discuss what the feature should achieve. Do not let the online LLM write code. Create the user documentation for the feature you will write (e.g., a README.md or a blog page). I start with an open-ended question to research the feature; that primes the model. Your exit criterion is that you like the documentation or promotional material enough to want to write the code.

To exit this step, have it create a "documentation artefact" in markdown (e.g. the README.md or blog post). Save that to disk so that you can point the coding agent at it.

If you don't want to pay for a subscription to an expensive model, you can install Dive AI Desktop and use pay-as-you-go models that offer much better value. Here is a video on setting up Dive AI to do web research with Mistral:

Next, tell the online model to "create a description-only coding prompt (do not write the code!)". Do not accept the first answer. The more effort you put into perfecting both the markdown feature documentation and the coding prompt, the better.

If the coding prompt is too long, then the artefact is too big! Start a fresh chat and create something smaller. This is Augmented Intelligence ticket grooming in action!

Paste the groomed coding prompt and the documentation into the agent, and let it run. I always use a git branch so that I can let the agent go flat out. Cursor background agents, Copilot agents, and OpenHands are all getting better.

I only restrict git commit and git push. I ask it first to make a GitHub issue using the gh cli and tell it to make a branch and PR.

The models love to dive into code, break it all, get distracted, forget to update the documentation, hit compaction, and leave you with a mess. Do not let them be a caffeine-fuelled flying squirrel!

The primary tool I am using now prints out a Todos list. The order it picks is usually the opposite of the safe way to do things!

⏺ Update Todos
  ⎿ ☐ Remove all compatibility mode handling
     ☐ Make `{}` always compile as strict
     ☐ Update Test_X to expect failures for `{}`
     ☐ Add regression test Test_Y
     ☐ Add INFO log warning when `{}` is compiled
     ☐ Update README.md with Empty Schema Semantics section
     ☐ Update AGENTS.md with guidance

That list is in a perilous order. Logically, it is this:

  1. Delete logic (broken code, invalid tests)
  2. Change logic (more broken code, more invalid tests)
  3. Change one test (the one closest to what you are doing)
  4. Add one test (finally! the objective!)
  5. Change the README.md and AGENTS.md

If the agent context compacts, things go sideways, you get distracted, and you will end up with a bag of broken code.

I set it to "plan mode", or else immediately interrupted it, to reorder the Todo list:

  1. Change the README.md and AGENTS.md first
  2. Add one test (insist the test is not run yet!)
  3. Change one test (insist the test is not run yet!)
  4. Add/Change logic
  5. Now run the tests
  6. Delete things last

I am not actually a big fan of the built-in Todos list of the two big AI labs. The models really struggle with any changes to the plan. The Kimi K2 Turbo seems more capable of pivoting. I have a few tricks for that, but I will save them for another post.

This past weekend I decided to write an RFC 8927 JSON Type Definition validator based on the experimental JDK java.util.json parser. The PDF of the spec is 51 pages. There is a ~4,000-line compatibility test suite. A jqwik property test generates 1,000 random JTDs, which flushed out several bugs. The total number of unit tests written was 509.

Using a single model family is a Bad Idea (tm). For online research I alternate between full-fat ChatGPT Desktop, Claude Desktop, and Dive Desktop to use GPT-5 High, Opus 4.1, or Kimi K2 Turbo in turn.

For agents I have used all the models and many services. Microsoft kindly allows me to use full-fat Copilot with agents for open-source projects for free ❤️ I have a Cursor subscription to use their background agents. I use Codex, Claude Code, and Gemini. The model seems less important than writing the documentation first and writing tight prompts. I am currently using an open-weights model at $3 per million tokens for the heavy lifting on a pay-as-you-go basis, yet I cross-check its plans with GPT-5 and Sonnet.

Rewrite History: Your Omniscient View of the Past

This is a submission for the KendoReact Free Components Challenge.

Hello Dev Community! 👋

Teammate: @sri_charan_5b9c2e5e77b8d4

We are thrilled to share Rewrite History: Your Omniscient View of the Past, an interactive game where history isn’t just read—it’s lived, shaped, and rewritten. Built using Kendo React components, Node.js, and the Nuclia RAG model, the game blends learning, storytelling, and strategy into a fully dynamic experience.

🕹️ Game Concept
• Choose an era: Ancient Rome, Renaissance, 20th century, and more
• Pick a historical character to play
• Make decisions that affect stats like influence, wealth, relationships, and health
• Every journey is dynamically generated and can follow history or diverge into alternate outcomes
• At the end, your story is compiled into a personalized book
Example choices generated by the game:
• “What if Napoleon chose peace instead of war?”
• “Could a scientist share a discovery earlier and change history?”

📖 Key Features
Dynamic Timeline Representation
Visualizes the main historical timeline alongside branches created by your choices, so you can track both actual history and your alternate paths.
Summary Book
Automatically collects all events of a particular timeline. You don’t have to dig through pages—you can read your journey as a narrative or explore others’ stories.
Nuclia RAG Model Integration
• Historical events and character data are uploaded to Nuclia RAG.
• When generating the next event, the model fetches contextual historical data.
• Combined with your character’s current stats and attributes, this data is sent to an LLM that outputs three dynamic choices, making the gameplay unpredictable and engaging.

Youtube video link: https://youtu.be/e_5GR61W2pI?si=5VxDBaPLJ9BDqtMu
📸 Screenshots
Homepage

Era Selection

Character Selection

Game Dashboard / Stats Overview

Timeline & Choices

Story Summary / Alternate History

Other Pages
• Player Books Page

• Input (from @progress/kendo-react-inputs)

• TextArea (from @progress/kendo-react-inputs)

• Rating (from @progress/kendo-react-inputs)

• AppBar (from @progress/kendo-react-layout)

• AppBarSection (from @progress/kendo-react-layout)

• Avatar (from @progress/kendo-react-layout)

• Button (from @progress/kendo-react-buttons)

• Card (from @progress/kendo-react-layout)

• CardHeader (from @progress/kendo-react-layout)

• CardBody (from @progress/kendo-react-layout)

• ProgressBar (from @progress/kendo-react-progressbars)

• Fade (from @progress/kendo-react-animation)

• Badge (from @progress/kendo-react-indicators)

How the Nuclia RAG model is used
Our interactive historical storytelling game is powered end-to-end by Nuclia’s Retrieval-Augmented Generation (RAG) model. Instead of writing static branching storylines, we give the model structured historical data and let it retrieve and generate content on the fly.

Here’s our process:

Building the Knowledge Base
We upload detailed, real-world historical data about each character:
– Chronological events
– Age, stats, and personality traits
– Political, social, and cultural contexts

This information becomes our searchable knowledge base inside Nuclia.

Retrieval + Generation as a Single Step
When the player reaches a new point in the story, we send the player’s current state (age, stats, personality, current event) to Nuclia’s RAG endpoint.

The RAG model automatically:

Retrieves the most relevant context from our uploaded knowledge.

Generates historically grounded yet dynamic choices.

The Nuclia RAG model returns three plausible next events or actions the player can take. These aren’t pre-coded. They’re dynamically generated based on:

The player’s previous decisions

The character’s evolving stats and personality

Authentic historical information from the knowledge base

Replayability and Scalability
Because the RAG model merges retrieval with generation, our game can scale infinitely. It also ensures that each playthrough is different while still staying faithful to history.

⚙️ Tech Stack
Frontend:
• React + Kendo UI components (menus, dashboards, timeline visualizations)
• Vite, Tailwind, and custom CSS for styling
Backend:
• Node.js + Express
• Nuclia RAG model for AI-generated storylines
• JSON-based storage for player books
Features:
• Dynamic, AI-generated branching storylines
• Personalized books for each journey
• Replayable history experiences

💡 Why It’s Different
• Every playthrough is unique thanks to AI-powered story generation
• History becomes interactive and immersive, not static
• Players can learn, explore, and create their own alternate histories
• Fully polished and responsive UI thanks to Kendo React components

🔗 Links
• Live Demo: https://kendohack.onrender.com/
• Source Code: https://github.com/SriCharan-616/kendohack

🚀 Next Steps
• Add more eras and historical characters
• Enhance AI storytelling for richer, more diverse narratives
• Mobile-first interface and accessibility improvements
• Social features for sharing and exploring player-created histories

History is no longer just facts in a book—it’s something you can live, shape, and share.

Monolithic Architecture in Contemporary Startups

As technology advances, systems development has become increasingly accessible and agile. At the same time, the complexity of the structures needed to keep up with the market's ever-growing demands and expectations has increased. This raises the question of how that complexity plays out in smaller systems, especially in the context of startups.

From this comes the question: should a team adopt a more sophisticated architecture merely to keep up with technology standards imposed by the industry, or can keeping a simple monolithic structure be the most efficient path in the early stage of a business? According to Fowler (2015), monolithic architecture remains a viable alternative for emerging projects, as it offers simplicity, speed of implementation, and lower initial maintenance cost.

The word "monolith" has Greek roots, combining monos (single) and lithos (stone), and is associated with something solid and concrete, indivisible and formed from a single block. That meaning was carried into software development to describe monolithic architecture, a traditional model in which all of a system's functions are centralized in a single structure (WIKIPÉDIA, 2025a). In practice, this means the application's different modules are interconnected and compiled into a single executable, operating as one shared unit.

This type of architecture was not designed by any single person; it emerged gradually and consolidated itself as the default way to build systems. For a long time, the dominant discourse was therefore about migrating monolithic systems to microservices, emphasizing the adoption of more modern and efficient architectural models. That narrative changed, however, once it became clear that the choice of architecture should be aligned with the business model and the company's context, not just with whatever architectures are in fashion (WIKIPÉDIA, 2025b).

Developing a system with a monolithic architecture is considered faster because there is no need to build complex communication between components, since everything lives in the same codebase. Another notable characteristic is how easy the infrastructure is to monitor and maintain: with only a single application to manage, operations teams do not have to deal with the complexity of multiple services. In general, there is also a reduction in costs for servers, inter-system communication, and monitoring.

Given these advantages, we can conclude that this architectural model is best suited to developing smaller, less complex systems, making it a great choice for companies in their early stages, such as startups. In these cases, the simplicity of the monolith ensures agility, reduces costs, and lets the team focus its efforts on evolving the product without having to worry, from the outset, about more sophisticated and complex structures.

Despite its ease of implementation, in the long run the monolithic model has disadvantages compared with other software architectures. Microservices architecture, for example, according to research by Amazon AWS (2025a), "provides a robust programming foundation for your team and supports your ability to add more features flexibly." This shows that for fast-growing startups, the so-called "unicorns", segmenting the code becomes inevitable if the company is to scale efficiently.

Even so, "breaking up the monolith" is no simple task. Case studies such as Netflix's show that migrating to microservices demands strategic planning, heavy investment, and deep cultural change within the organization (AMAZON AWS, 2025b). Initially, the company faced serious availability problems and scalability limits in its monolithic architecture, which drove the decision to restructure. The transition did not happen overnight, however: it was a gradual process involving service decomposition, the reconfiguration of continuous-delivery pipelines, and the massive adoption of automation practices. Beyond resolving technical bottlenecks, this shift allowed Netflix to serve millions of simultaneous users around the world with greater stability and agility.

Similarly, other large companies such as Amazon, Uber, and Spotify have documented the challenges of modernizing their architectures, showing that adopting microservices is not only a technological decision but an organizational one (LIMMA, 2019; DREAMFACTORY, 2025). This reinforces the idea that, although complex, the change tends to be inevitable for businesses seeking scalability in highly competitive digital environments.

Monolithic architecture thus proves to be a strategic choice for early-stage startups, since it provides simplicity, fast development, and lower maintenance costs. This approach makes it possible to validate the product and adjust to the market without complex structures, allowing the team's resources to be directed toward growing the business.

However, as the user base expands and the feature set diversifies, the monolith tends to become an obstacle to scalability, making maintenance and the adoption of new technologies harder. Cases such as Netflix, Amazon, and Uber show that the transition to microservices, while challenging, is often inevitable in high-growth scenarios. The monolithic model should therefore be understood not as outdated, but as a pragmatic starting point, one that must be accompanied by planning for future architectural evolution.

Generative AI in 2025: How to Survive and Thrive in the New Technological Era

Generative artificial intelligence (generative AI) has stopped being a science-fiction concept.

Today it writes text, creates illustrations, composes music, and even writes code in seconds. What once seemed like magic is now in the office, at the university, and even on your phone.

In this article I'll explain what it is, how it works, what opportunities it brings, and what risks we should consider in order to use it responsibly.

💡 This article is intended both for people who program and for those who coordinate, design, or are simply curious about technology.

  1. What is generative AI?

It is a branch of AI that does not just analyze data; it generates new content.
All it takes is writing an instruction (a prompt) to get an article, an image, or a block of code.

👉 In 2025 we are no longer talking about an experiment, but about a tool for work and creativity across many industries.

  2. How it works: the magic behind the content

The models are trained on millions of examples and learn patterns. Thanks to a mechanism called "attention", the system picks the most relevant information to give a coherent answer.

A simple example: if you write "make me a to-do list for organizing an event", the model does not copy from the internet; it predicts the most likely words and creates a new plan.

  3. Modalities and multimodality

Until recently we had specialized models (text → text, text → image). The current trend is multimodal models: they understand and combine text, images, audio, and video in the same interaction.

🎬 Imagine describing an idea in words and receiving a short video as the result.

  4. Limitations and risks

As incredible as it sounds, generative AI has problems:

Hallucinations: it can invent data that looks true.

Limited knowledge: it does not know what happened after its training cutoff.

Bias: it repeats the prejudices present in its data.

Environmental costs: training models consumes a lot of energy.

Misuse: from fake news to visual fraud.

💡 Tip: use it as a tool, not as the only source of truth.

  5. The human role in the AI era

AI is not here to replace us, but to demand that we strengthen what makes us unique: creativity, critical thinking, ethics, and empathy.

The professionals who thrive will be the ones who learn to work with AI, not against it.

  6. Practical applications

Some current examples:

🎓 Education: assistants that summarize readings or create personalized exercises.

💼 Business: drafts of contracts, reports, or marketing ideas.

🎨 Art and design: generating images, music, and videos.

👩‍💻 Technology: help with programming and rapid prototyping.

  7. Autonomous agents: the next step

Beyond "answering questions", autonomous agents are emerging: programs that chain tasks together, interact with applications, and make limited decisions without constant supervision.
Example: an assistant that receives an email, analyzes the request, looks up information online, and generates an automatic reply.

  8. AI in organizations

The real change happens when a company integrates AI into its processes and culture, not just into individual tasks. This requires:

Leadership with vision.

Ongoing training.

Clear ethical policies.

Room for experimentation.

  9. The art of asking: prompt engineering

The result depends on the instruction. Learning to provide context, examples, and clear objectives is fundamental.
Example: instead of writing "write a report", try:

"Write a one-page report, with bullet points and a final summary for non-technical executives."

  10. Looking ahead

The debate is open: will it unlock our creative potential, or will it create new inequalities?
The only certainty is that generative AI is already part of our digital lives and will keep transforming how we work and relate to one another.

Conclusion

Generative AI is not a fad: it is a revolution in progress. Our challenge is not to fear it, but to learn to use it with judgment, ethics, and creativity.

🔑 Adapting is mandatory; thriving is the great opportunity.

How I Built a Web Vulnerability Scanner - OpenEye

Creating secure web applications is no easy feat. Vulnerabilities like SQL injection, XSS, or CSRF are still among the most common attack vectors, yet not every developer has the time or the skills to run deep security scans.

So I decided to solve this problem by building OpenEye: a modern, cloud-hosted, and user-friendly web vulnerability scanner that leverages OWASP ZAP under the hood but wraps it in a clean Django-based interface, making vulnerability scanning accessible both to non-technical users and to professionals who just want clear, concise output.

This blog covers the concept, architecture, implementation, security considerations, and deployment of the project.

The idea was simple: what if anyone, not just security experts, could run a reliable vulnerability scan against their own websites and instantly see a structured report highlighting risks by severity?

With OpenEye, users log in, enter a target URL, and initiate a scan. The system then spins up a dedicated ZAP container, performs both active and passive scanning, and outputs results grouped by severity levels — critical, high, medium, and low.

Users can also revisit past scans through their personal history panel, ensuring that important findings aren't lost in a sea of logs.

The frontend is built with Tailwind CSS + JavaScript and Django templates. It's simple and intuitive; even a third-grader wouldn't get lost.

The goal was to reduce friction so whether you're a developer or someone without a technical background, you can quickly run a scan and make sense of the findings.

Initially, I considered building my own scanning engine but that would be reinventing the wheel. OWASP ZAP is an industry-standard DAST (Dynamic Application Security Testing) tool, and it already:

  • Detects SQLi, XSS, CSRF, authentication/session flaws, and misconfigurations
  • Provides a JSON API for integration
  • Has a robust active and passive scanning mechanism

By embedding ZAP inside Docker containers, each scan runs in isolation, preventing cross-contamination and ensuring resource efficiency.

Here's how the system fits together:

The app has a frontend and backend built with Django, PostgreSQL (via Supabase) as the database to store scan history, authentication with AWS Cognito, and, for the core functionality, OWASP ZAP running in Docker containers. After testing locally, I hosted it on an AWS EC2 instance so users can access the app easily.

High level view of OpenEye

I've said a lot; deep breaths. Now let's take it slowly and walk through the implementation step by step.

Django Project Setup

I'll assume you already know how to set up a Django project and run it. If not, no worries at all; a quick Google search will turn up a ton of resources. Using Django isn't compulsory; use any framework that works for you. For me, I wanted to learn a little more Django, hence the choice.

If you'll be working entirely locally, you could just spin up a PostgreSQL database on your machine. Otherwise, use a cloud-hosted database. Supabase was my choice because, why not, it's free and easy to use. Create the necessary tables and fields to store your scan information. It's all up to you; you're free to store whatever you like.

OWASP ZAP Integration

One of the best parts about OWASP ZAP is that it exposes a REST API out of the box. That means instead of manually interacting with the ZAP desktop client, you can programmatically control scans from your own application. This was perfect for me because I wanted OpenEye to feel like a standalone platform.

At a high level, here's what I needed to do:

  1. Trigger a spider scan – to crawl the target application and discover URLs/endpoints
  2. Run an active scan – to test those discovered endpoints for vulnerabilities
  3. Fetch alerts – to retrieve all the issues ZAP found so I could parse, rank, and display them in OpenEye's dashboard

ZAP's REST API makes these tasks surprisingly straightforward. For example, here's a simplified version of the wrapper functions I built:

def start_spider_scan(self, target_url: str) -> str:
    return self._make_request(
        "/JSON/spider/action/scan/",
        {'url': target_url}
    ).get('scan', '')

def get_alerts(self, target_url: str) -> Dict[str, Any]:
    return self._make_request(
        "/JSON/core/view/alerts/",
        {'baseurl': target_url}
    )

All I'm really doing here is sending HTTP requests to ZAP's REST API endpoints:

  • /JSON/spider/action/scan/ → tells ZAP to start crawling a target
  • /JSON/core/view/alerts/ → retrieves all alerts (vulnerabilities, misconfigurations, etc.) for that target

The _make_request() helper I wrote under the hood is just an HTTP client method that talks to ZAP running inside its Docker container. So instead of re-inventing the wheel and writing my own vulnerability scanner from scratch, I leverage ZAP's proven scanning logic but wrap it in my own backend API layer.
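For the curious, a stripped-down version of that helper could look roughly like this (a sketch using the requests library; the real implementation may handle errors and the API key differently):

import requests

class ZapClient:
    def __init__(self, base_url: str = "http://localhost:8080", api_key: str = ""):
        self.base_url = base_url
        self.api_key = api_key  # set if the ZAP daemon was started with an API key

    def _make_request(self, endpoint: str, params: dict) -> dict:
        # Every ZAP JSON endpoint is a simple GET with query parameters.
        if self.api_key:
            params = {**params, "apikey": self.api_key}
        response = requests.get(f"{self.base_url}{endpoint}", params=params, timeout=30)
        response.raise_for_status()
        return response.json()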

This approach gave me two big advantages:

  1. Abstraction & Control – My backend only needs to call simple Python functions like start_spider_scan() or get_alerts(). I don't have to expose raw ZAP API calls directly to the frontend.

  2. Custom Processing – Once I had the JSON responses, I could parse them, group issues by severity, and feed them into my own database and dashboard instead of relying on ZAP's default reporting.

So when a user runs a scan in OpenEye, they're really triggering these wrappers in my backend, which in turn communicate with ZAP's REST API inside Docker. Also, by default, ZAP listens on port 8080 inside the container.

For example, if I start ZAP in Docker like this:

docker run -u zap -p 8080:8080 -i ghcr.io/zaproxy/zaproxy:stable zap.sh -daemon -host 0.0.0.0 -port 8080

Where:

  • -daemon runs ZAP headlessly (no GUI)
  • -host 0.0.0.0 makes it listen on all interfaces
  • -port 8080 exposes the API at http://localhost:8080
  • -p 8080:8080 maps the container port to the host, so my backend can reach it

Once ZAP is running, the API is always accessible at endpoints like:

  • http://localhost:8080/JSON/spider/action/scan/
  • http://localhost:8080/JSON/core/view/alerts/

Managing authentication securely is one of those things that looks simple on the surface ("just add login/signup") but in reality is full of pitfalls: password storage, token lifetimes, OAuth2 flows, etc. And I had a bit of a headache setting up the auth callback (side-eye to AWS for this).

Either way, Cognito gave me:

  • Secure defaults — password policies, account recovery, MFA
  • OAuth2.0 compliance — standard flows (authorization code, implicit, etc.)
  • Scalability — I don't need to worry about user pools or scaling login endpoints

Here's what happens when a user logs in to OpenEye:

1. Redirect to Cognito

When the user clicks "Login," they're redirected to my Cognito hosted login page. Cognito provides a default UI, which saved me time.

2. Authorization Code Returned

After the user enters their credentials, Cognito redirects them back to my Django app with an authorization code in the URL query string.

3. Backend Exchanges Code for Tokens

My Django app then takes that code and makes a server-to-server POST request to Cognito's /oauth2/token endpoint. This returns:

  • An ID token, which is basically a JWT containing user identity claims like email, sub, etc.
  • An Access Token and refresh token

4. User Session Established

Django decodes the ID token, extracts the user info, and establishes a session. From the app's perspective, the user is now authenticated.
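
As a rough sketch of step 3, the exchange itself is a single server-to-server POST to Cognito's /oauth2/token endpoint. The domain, client ID/secret, and callback URL below are placeholders rather than my real configuration, and the sketch assumes the app client has a secret (which Cognito expects via HTTP Basic auth):

import base64
import requests

def exchange_code_for_tokens(code: str) -> dict:
    # Placeholders: substitute your own Cognito domain, app client, and callback URL.
    domain = "https://YOUR_DOMAIN.auth.us-east-1.amazoncognito.com"
    client_id = "YOUR_CLIENT_ID"
    client_secret = "YOUR_CLIENT_SECRET"
    redirect_uri = "https://openeye.chickenkiller.com/auth/callback/"  # hypothetical path

    # With a client secret configured, Cognito expects the credentials via HTTP Basic auth.
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    response = requests.post(
        f"{domain}/oauth2/token",
        headers={
            "Authorization": f"Basic {basic}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        data={
            "grant_type": "authorization_code",
            "client_id": client_id,
            "code": code,
            "redirect_uri": redirect_uri,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # id_token, access_token, refresh_token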

One thing Cognito enforces, for good reason, is that OAuth2 redirects must happen over HTTPS (except for localhost during development). This meant that when I hosted my app on an EC2 instance, I couldn't just serve HTTP to Cognito; I had to set up SSL.

First, I created a free subdomain for my app using FreeDNS (worth checking out) and pointed it at my AWS EC2 instance, then ran this command:

sudo certbot --nginx -d openeye.chickenkiller.com

Certbot automatically provisioned and installed free SSL certificates. Nginx handled HTTPS termination and forwarded requests to Django running on Gunicorn.

Then, finally, the entire OAuth2 flow worked securely end to end...
Relax and keep reading; I'll elaborate more on this in a moment.

In reality the order wasn't this clean: I had set up the app on the EC2 instance before creating the subdomain and registering it with SSL, and there was some back and forth. To make it easier to follow, I've described it alongside the auth setup above, so don't let the ordering confuse you.

Once I had the core pieces (ZAP API, Django backend, Cognito authentication), I needed a place to run everything in the cloud. For this, I chose an AWS EC2 t2.micro instance (Ubuntu 22.04): small, cheap, and well within the AWS Free Tier.

The very first thing I did after launching the instance was update packages and install the essentials: Python, Docker, Nginx, etc.

At this point, I had a clean Ubuntu server with the tools needed to run both my app and ZAP.

Django doesn't serve production traffic directly, so I used Gunicorn as the WSGI HTTP server. Gunicorn is lightweight and designed specifically for running Python web apps in production; it runs multiple worker processes of my app so it can handle concurrent load.

Then I put Nginx in front of Gunicorn for two reasons:

  1. Static files: Nginx serves static assets (CSS, JS, images) much faster than Gunicorn
  2. Reverse proxy: Nginx terminates HTTPS and forwards requests to Gunicorn on port 8000

So when clients hit https://openeye.chickenkiller.com:

  • Nginx terminates SSL, then proxies the request to Gunicorn running Django on port 8000
  • Static files are served directly by Nginx for speed

For the scanning engine, I didn't want to install ZAP directly on the host. Running it in Docker gave me isolation, portability, and easy lifecycle management (start, stop, update). So I just downloaded the image and ran it detached (-d means it stays up as a background service).
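
For reference, the detached run is roughly the same command as before with -i swapped for -d (a sketch; adjust the flags to your setup):

docker run -u zap -d -p 8080:8080 ghcr.io/zaproxy/zaproxy:stable zap.sh -daemon -host 0.0.0.0 -port 8080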

Now, my Django backend can talk to the ZAP API via http://localhost:8080.

So that's it! Not as intimidating as you thought, huh? Oh yeah, and why the name OpenEye, you ask? Well, why not? Don't you think it's the perfect name for a vulnerability scanner? OpenEye: it's literally a wide-open eye for finding vulnerabilities (whatever that means). Feel free to replicate this and add your own spice!

Tangible India - A journey through numbers

This is a submission for the KendoReact Free Components Challenge.

I built Tangible India – A journey through numbers, an interactive web app that blends facts about India with numbers in a fun and engaging way.

The idea is simple yet impactful:

  • Numbers (0, 1, 2, 3, 1947, …) are the foundation of computer science and daily life.
  • For each number, the app presents a fascinating fact related to India — ranging from history, culture, geography, science, to modern-day achievements.
  • Memes are included as an additional category, since a lot of knowledge and news now spreads through them.

This turns abstract digits into tangible insights, helping users learn something new about India while exploring numbers sequentially or randomly.

It’s a learning + curiosity app designed for students, educators, trivia lovers, and anyone interested in India.

🔗 Live Project: https://himanshuc3.github.io/tangible-india/

💻 Source Code: Tangible India - repository

Screenshots:

  • Homepage (light theme)
  • Homepage (dark theme)
  • Icons & tabbed navigation

I leveraged KendoReact's components as atomic building blocks for creating the layout of Tangible India from scratch. Usage was restricted to the following components, mostly those available under KendoReact's free tier:

  • Button – Used across the whole website for actions like triggering search, random fact generation, etc.
  • SVG Icons (plusOutlineIcon, minusOutlineIcon, etc.) – The icon set offers a wide selection of custom icons along with the flexibility to modify styles according to the theme. Used in conjunction with buttons for accessibility and to improve UX.
  • Card – Acts as a fundamental building block (a poster, in layman's terms) for showing the facts linked to each number.
  • Input – Used to get user input for searching/filtering through the available list of facts.
  • Default Theme – Used to import the base theme that serves as the design system and styles for each of the imported components.
  • TabStrip, TabStripTab – Useful for displaying multiple facts linked to the same number.
  • AppBar, AppBarSection, AppBarSpacer – The site header uses these components as its layout building blocks.
  • Popover – Shows keyboard shortcuts via a popover.
  • Slider, SliderLabel – The slider is a critical component, giving a bird's-eye view of the progression of number-based facts.
  • Tooltip – Used to explain unfamiliar controls, like the "Want to contribute?" button in the header.
  • ChipList, Chip – Category labels like "historical" and "cultural" are rendered with Chip and ChipList.

I used an AI assistant to:

  • Suggest initial draft UI layouts, so I could avoid digging through the API docs and start from a 20-30% scaffold instead of 0%.
    • Generating the header using AppBar.
    • Generating the Slider code for showing number progression.
  • Explore which components best fit the required UI and UX.
    • For example, I described the UX for selecting among different categories, asked which relevant components exist under the free tier of KendoReact, and went ahead with ChipList.

Challenges Faced using the AI Assistant

While the KendoReact AI Assistant (MCP server) does help in a couple of the scenarios described above, it still feels nascent when it comes to retaining and differentiating between contexts. As an example, consider the following progression of events:

  1. Creating the header using the AppBar component, with relevant context for the features and how the UX should look.
  2. Improving a Search button component in the Filter component with an appropriate icon.
  3. Adding tabbed navigation to the facts card for multiple facts linked to the same number.

The MCP hallucination: although these prompts were given as separate chats and modify different components, the code suggested by the coding assistant merged all of them, outputting a single component that aggregated all of the previous request outputs plus the most recent code suggestion.

Components like Animation don't work very intuitively; in fact, I skipped it, since the API is time-consuming to pick up and using it out of the box with the help of the examples wasn't working.

✨ With Tangible India, I wanted to show how numbers can tell stories — and how React + KendoReact makes it easy to turn that vision into reality.

Talos Kubernetes in Five Minutes

Original post: https://nabeel.dev/2025/09/28/talos-in-five

Talos Linux is an OS designed specifically for running Kubernetes.
It is locked down with no SSH access. All operations are done through a secured API.
The documentation is (understandably) catered to setting up multi-node Kubernetes clusters that are resilient to failure.
But what if you want the cheapest possible Kubernetes cluster, for testing purposes for example, where reliability isn't super important?

In this article I'll show you how to set up a simple single-node Talos cluster in less than five minutes.
By following these instructions, you can have a full Kubernetes cluster running on a single VM,
without the extra costs of control planes and load balancers that cloud providers normally add onto their Kubernetes services.

The basic outline of steps to create a single-node cluster is:

  • Get a Talos ISO image
  • Create a blank Talos VM instance
  • Update your config to allow workloads on control plane nodes
  • Initialize the Talos VM and bootstrap the cluster
  • Install MetalLB
  • Install Envoy Gateway

Okay, this is where I cheat a little. I'm not counting the time it takes to download and upload a Talos VM image as part of the 5 minutes.
This step depends on which cloud provider (or home lab setup) you have.
The good news is that the official documentation is quite good.
Find the section that matches your setup and follow those instructions.

Essentially, you are going to be downloading a Talos Linux ISO.
If you are using a cloud provider (Azure, AWS, OCI, DigitalOcean, etc.),
you will then need to upload that image so that VMs can be created from that image.
I have done this on DigitalOcean and Oracle Cloud. It takes a bit of time, maybe 10-15 minutes,
but it's not hard and you only need to do it once to create as many VMs as you like going forward.

Next you will need to create a Talos Linux VM (or server if you're installing on bare metal).
As with the previous section, you will need to follow the instructions based on the infrastructure you are using.
I've been most recently using DigitalOcean and automating everything with PowerShell.
For me, creating a new blank Talos VM looks like this:

doctl compute droplet create --region sfo3 --image $talosImageId --size s-2vcpu-4gb --enable-private-networking --ssh-keys $sshKeyId $vmName --wait

After creating your blank VM, DO NOT follow any other instructions from the documentation!
Specifically, do not execute any of the talosctl commands described there.
This is where we will diverge from the official documentation.

Once your VM or machine is created, make note of its IP address for the following steps.

Now we are going to initialize our Talos Kubernetes cluster.
Do this with the following commands:

talosctl gen config $vmName "https://${VM_IP}:6443" --additional-sans $VM_IP -o $CONFIG_DIR
export TALOSCONFIG="$CONFIG_DIR/talosconfig"
talosctl config endpoint $VM_IP
talosctl config node $VM_IP

This will create a directory and populate it with an auto-generated cert and some default configuration files.
Note the following:

  • --additional-sans ensures that the certificate is valid for the VM's public IP address
  • Set the TALOSCONFIG environment variable so you don't have to add --talosconfig mydir/talosconfig every time you use talosctl

Talos normally configures separate control plane and worker nodes.
This is good practice for production clusters, but is expensive when you just want to test or kick the tires.
Instead, we want to create a single control plane VM that will also be our worker.
To do this, edit controlplane.yaml in the Talos config directory.
Scroll to the end of the file and uncomment (remove the #) the line # allowSchedulingOnControlPlanes: true.

Now we are ready to initialize the VM. By default, a freshly created VM waits for someone to configure it.
Once you run this command, the VM is locked down to only work with the certificate that you generated with the talosctl gen config command.
Technically, there's a risk that someone could randomly beat you to configuring the VM and take ownership.
The likelihood of this happening is very low, but if it did, you would see a failure in the apply-config command,
and you would simply delete the VM.
There are more secure ways to do this, specifically generating an ISO that is preconfigured to only respond to your cert.
However, that is beyond the scope of this simple tutorial.

talosctl apply-config --insecure --nodes $VM_IP --file "$CONFIG_DIR/controlplane.yaml"

Give the VM a few seconds (I wait 10) to apply the configuration, then run talosctl bootstrap.
You can then run talosctl health or talosctl dashboard to watch the cluster come alive in real-time.

At this point, your Kubernetes cluster is alive and you just need to generate the kubeconfig to use it:

talosctl kubeconfig $CONFIG_DIR
export KUBECONFIG=$CONFIG_DIR/kubeconfig

You should now be able to run commands like kubectl get pods --all-namespaces or k9s.

Have you ever created a LoadBalancer type service in AKS, EKS, etc., to create a load balancer that routes traffic to your cluster?
MetalLB will give that same functionality, but for free on your bare VM.
You can install MetalLB as the documentation prescribes with:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl wait --timeout=5m --for=condition=available --all deployments -n metallb-system

After that, we need to configure an IPAddressPool so MetalLB is aware of the IP address we want it to use:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100/32  # Replace with your VM's public IP

Replace 192.168.1.100 with the public IP address of your VM, save the YAML to metallb-ipaddresspool.yaml and then run kubectl apply -f metallb-ipaddresspool.yaml.
Congratulations, you now have MetalLB installed and ready to work with your Gateway Controller.

Finally, you will probably want to use the Kubernetes Gateway API
to route traffic through the public IP address to services running in your cluster.
I found that Envoy Gateway was the easiest solution to achieve this.
The quick start documentation worked flawlessly, but in summary:

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.5.1 -n envoy-gateway-system --create-namespace
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available

You can test that everything works as it should with the following:

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v1.5.1/quickstart.yaml -n default
curl --verbose --header "Host: www.example.com" http://$VM_IP/get

And there you have it! A simple single-node Kubernetes cluster in less than the time it took to read this article.
You can create as many as you like, tear them down, and create more when you need them.
I ended up automating all of this in a PowerShell script, and the time to run is 3-4 minutes.
This script likely won't work right out of the box for you, but it should be fairly easy to adapt it if you like:

echo "Getting VM parameters..."
$sshKey = (doctl compute ssh-key list -o json | ConvertFrom-Json | where {$_.name.Contains('dummy')}).id
$imageId = (doctl compute image list -o json | ConvertFrom-Json | where {$_.name.Contains('Talos')}).id
$timestamp = Get-Date -Format "yyyyMMdd-HHmmss"
$vmName = "kcert-test-$timestamp"
mkdir $vmName | Out-Null
echo "Creating droplet..."
$vmJson = (doctl compute droplet create --region sfo3 --image $imageId --size s-2vcpu-4gb --enable-private-networking --ssh-keys $sshKey $vmName --wait -o json)
$vm = $vmJson | ConvertFrom-Json
$vmIp = $vm[0].networks.v4 | where {$_.type -eq 'public'} | Select-Object -ExpandProperty ip_address
echo "VM created with IP address: $vmIp"
echo $vmIp > $vmName/ip.txt
echo "Initializing Talos cluster at $vmIp"
talosctl gen config $vmName "https://${vmIp}:6443" --additional-sans $vmIp -o $vmName
$env:TALOSCONFIG = (Resolve-Path "$vmName/talosconfig").Path
talosctl config endpoint $vmIp
talosctl config node $vmIp
$yaml = Get-Content -Path "${vmName}/controlplane.yaml"
$yaml = $yaml -replace '# allowSchedulingOnControlPlanes:', 'allowSchedulingOnControlPlanes:'
Set-Content -Path "${vmName}/controlplane.yaml" -Value $yaml
talosctl apply-config --insecure --nodes $vmIp --file "${vmName}/controlplane.yaml"
echo "Sleeping for 10 seconds to allow the node to initialize..."
Start-Sleep -Seconds 10
talosctl bootstrap
echo "Sleeping for 10 seconds to allow the cluster to stabilize..."
Start-Sleep -Seconds 10
talosctl health
talosctl kubeconfig $vmName
$env:KUBECONFIG = (Resolve-Path "$vmName/kubeconfig").Path
echo "Setting up MetalLB"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
kubectl wait --timeout=5m --for=condition=available --all deployments -n metallb-system
@"
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - $vmIp/32
"@ | kubectl apply -f -
echo "Setting up Envoy"
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.5.1 -n envoy-gateway-system --create-namespace
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
echo "Here are your environment variables:"
$envVars = @(
    "`$env:KUBECONFIG = '$env:KUBECONFIG'",
    "`$env:TALOSCONFIG = '$env:TALOSCONFIG'",
    "`$env:VMIP = '$vmIp'"
)
$envVars | ForEach-Object { echo $_ }
$envVars | Out-File -FilePath "$vmName/env.txt" -Encoding utf8

Monolithic Architecture

1 INTRODUCTION

Monolithic architecture is one of the oldest and most widely used models in software development. For a long time, practically all systems were built following this pattern, mainly because of the limitations of distributed technologies and the practicality of bundling everything into a single program. In this model, the entire application (interface, business logic, and database) is organized into a single unit of execution (project). This makes things much easier at the start, because it simplifies deployment, testing, and even understanding the system. However, as the software grows, problems begin to appear, mainly related to maintenance, scalability, and resilience (FOWLER, 2015).

2 MONOLITHIC ARCHITECTURE

According to Richards and Ford (2020), the main characteristic of monolithic architecture is the coupling between its modules. This means the parts of the system do not operate fully independently, but rather like gears in the same machine. This way of building makes internal communication faster and simpler, but at the same time it creates difficulties when only one part needs to change without impacting the others. Martin Fowler (2015) adds that, even with its limitations, it often makes sense to start with a monolith, since distributed architectures such as microservices can add unnecessary weight in the early stages of a project.

One defense of this approach comes from David Heinemeier Hansson (2016), creator of Ruby on Rails. He uses the term 'Majestic Monolith' to refer to large but well-organized systems that keep delivering results without the need to split into dozens of smaller services. This view shows that a monolith does not have to be synonymous with mess; it can be a strategic choice when well designed.

3 ADVANTAGES OF THE MONOLITHIC MODEL

In practice, large companies keep monolithic systems running at global scale. GitHub, for example, still runs a Ruby on Rails monolith with millions of lines of code and thousands of developers working at the same time, while deploying daily (GITHUB ENGINEERING, 2024). Another example is Shopify, which follows the 'modular monolith' approach, in which the application remains a single system but is organized into smaller components with clear dependency rules (SHOPIFY ENGINEERING, 2020). Basecamp, in turn, openly advocates keeping its application as a monolith to reduce complexity costs and maintain team productivity (HANSSON, 2016).

Beyond these global examples, it is worth noting that many smaller systems, such as ERPs for small businesses, software for gyms (Next Fit Sistemas) and schools, and internal control systems within organizations, also follow the monolithic architecture. This happens because the simplicity of the model serves initial needs well and keeps costs low, especially in contexts without large technology teams or demands for massive scalability.

4 LIMITATIONS AND CHALLENGES

On the other hand, the disadvantages become evident as the system grows. One of the biggest problems is resilience: since all modules run in the same process, a failure at one point can bring down the entire system (IBM, 2024). Another limitation is deployment: even for small changes, the whole application must be rebuilt and redistributed, which increases the risk of failures in production (AWS, 2023). There are also scalability difficulties, since the system often has to be replicated in its entirety, raising infrastructure costs (VERNON; JASKUŁA, 2021).

According to Bucchiarone et al. (2018), many companies that start with monoliths end up migrating to microservices when growth demands more independence between teams and parts of the system. This migration, however, is not simple: it involves restructuring the code, reorganizing the team, and adopting new operational practices. For this reason, many experts recommend an intermediate path, the 'modular monolith', to reduce the risks of the transition and keep the evolution more controlled.

5 PRACTICAL EXAMPLES AND APPLICATIONS

Cases such as GitHub, Shopify, and Basecamp are the best known, but they are not the only ones. Many startups begin their operations with monolithic systems because of the ease of development and the need to get a product to market quickly. A clear example is Next Fit Sistemas, where around 70% of the architecture is a monolith; what runs as microservices today are peripheral pieces, such as the notification system. In addition, in sectors such as education, healthcare, and small industry, it is still very common to find management systems running as monoliths. This shows that the architecture is not confined to the past but remains current in many areas.

6 CONCLUSION

Monolithic architecture was, for a long time, the natural default for building systems and, even with the rise of distributed architectures, it remains a valid option in many contexts. Its main strength is simplicity: it is easier to start with, cheaper to maintain at first, and simpler to deploy. That is why it is still widely used both by large companies, such as GitHub and Shopify, and in small corporate and educational systems.

However, the problems of resilience, scalability, and long-term maintenance cannot be ignored. When the application grows too large, the monolith can become a bottleneck and demand architectural changes. The central point is that the monolith should not be seen as outdated or wrong, but as a choice that must fit the size of the project and the goals of the organization. In short, it is a solution that can be either strategic or limiting, depending on the context in which it is applied.

7 REFERENCES

AWS. Monolithic vs. Microservices: Key Differences. AWS, 2023. Available at: https://aws.amazon.com.
BUCCHIARONE, A. et al. From Monolithic to Microservices: An Experience Report from the Banking Domain. IEEE Software, v. 35, n. 3, 2018.
FOWLER, M. Monolith First. MartinFowler.com, 2015. Available at: https://martinfowler.com/bliki/MonolithFirst.html.
GITHUB ENGINEERING. Building GitHub with Ruby and Rails. GitHub Blog, 2024.
HANSSON, D. H. The Majestic Monolith. Basecamp Blog, 2016.
IBM. Monolithic vs. Microservices. IBM Cloud, 2024. Available at: https://www.ibm.com.
RICHARDS, M.; FORD, N. Fundamentals of Software Architecture. O'Reilly Media, 2020.
SHOPIFY ENGINEERING. Under Deconstruction: The State of Shopify's Monolith. Shopify Blog, 2020.
VERNON, V.; JASKUŁA, T. Strategic Monoliths and Microservices. Addison-Wesley, 2021.


Snapchat introduces a paid storage option for all the Memories hoarders out there

Snap is imposing a new storage limit on Snapchat's Memories feature, which has racked up impressive numbers since its introduction in 2016. According to Snap, users have saved more than one trillion Memories across its platform, and it's now introducing "Memories Storage Plans" for users who exceed 5GB of Memories.

In a press release, Snap detailed that the introductory storage plan allows up to 100GB of storage for Memories for $1.99 a month. Snapchat+ subscribers, who pay $3.99 a month, will get up to 250GB of storage, while Snapchat's highest-tier Platinum subscribers will get 5TB included with their $15.99 monthly cost.

Snap said that a "vast majority" of its Snapchat users won't notice any changes since they're far from hitting the 5GB limit. For users who hold onto thousands of Snaps, the company is now rolling out these storage plans. To ease the transition from unlimited storage to paid options, Snap will give anyone exceeding 5GB of Memories a year of temporary storage. These new storage subscriptions follow Snap's latest paid option for its Lens+ subscription, which costs $9 a month.

This article originally appeared on Engadget at https://www.engadget.com/social-media/snapchat-introduces-a-paid-storage-option-for-all-the-memories-hoarders-out-there-203013294.html?src=rss

Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children

Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how it's attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. The company said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to "engage a child in conversations that are romantic or sensual," which Meta said at the time was "erroneous and inconsistent" with its policies and removed that language. 

The document, which Business Insider has shared an excerpt from, outlines what kinds of content are "acceptable" and "unacceptable" for its AI chatbots. It explicitly bars content that "enables, encourages, or endorses" child sexual abuse, romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor, advice about potentially romantic or intimate physical contact if the user is a minor, and more. The chatbots can discuss topics such as abuse, but cannot engage in conversations that could enable or encourage it. 

The company's AI chatbots have been the subject of numerous reports in recent months that have raised concerns about their potential harms to children. The FTC in August launched a formal inquiry into companion AI chatbots not just from Meta, but other companies as well, including Alphabet, Snap, OpenAI and X.AI.


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-introduced-revised-guardrails-for-its-ai-chatbots-to-prevent-inappropriate-conversations-with-children-200444230.html?src=rss

Pick up this battery-powered Ring doorbell while it's down to $80 ahead of Prime Day

If you've been considering a video doorbell for your front door, Prime Day deals may have just what you're looking for at a good price. A great deal already available is on the latest Ring Battery Doorbell Plus, which is 47 percent off and down to only $80. 

The Battery Doorbell Plus offers a 150-by-150-degree "head to toe" field of vision and 1536p high-resolution video. This makes it a lot easier to see boxes dropped off at your front door since it doesn't cut off the bottom of the image like a lot of video doorbells.

This model features motion detection, privacy zones, color night vision and Live View with two-way talk, among other features. Installation is a breeze since you don't have to hardwire it to your existing doorbell wiring. Most users report that the battery lasts between several weeks and several months depending on how users set up the video doorbell, with power-heavy features like motion detection consuming more battery life.

With most video doorbells today, you need a subscription to get the most out of them, and Ring is no exception. Features like package alerts require a Ring Home plan, with tiers ranging from Basic for $5 per month to Premium for $20 per month. You'll also need a plan to store your video event history.

Ring was acquired by Amazon in 2018, and now offers a full suite of home security products including outdoor cameras, home alarm systems and more. This deal is part of a larger sale on Ring and Blink devices leading up to Prime Day.

This article originally appeared on Engadget at https://www.engadget.com/deals/pick-up-this-battery-powered-ring-doorbell-while-its-down-to-80-ahead-of-prime-day-154508825.html?src=rss

Martin Shkreli has to face claims of copying one-of-a-kind Wu-Tang Clan album

Martin Shkreli, better known as Pharma Bro for his price-gouging antics with AIDS medication Daraprim, is going to have to defend against claims of misappropriating trade secrets with the unique Wu-Tang Clan album, Once Upon a Time in Shaolin. Earlier this week, US District Court Judge Pamela Chen wrote in a decision that Shkreli has to face a lawsuit that accuses him of improperly saving copies and playing the one-of-a-kind album for followers, which reduced its value and exclusivity.

The lawsuit was filed by PleasrDAO — which, according to its own website, is a collective of people involved with cryptocurrency, NFTs and digital art. Once Upon a Time in Shaolin has a strange ownership history, starting with Shkreli purchasing the one-of-one studio album in 2015 for $2 million. After a fraud conviction, Shkreli had to forfeit his assets, including the album, leading to PleasrDAO acquiring it in a government auction for $4 million.

On top of the album's highly exclusive nature, it has a condition where it can't be "commercially exploited for 88 years" by any subsequent owners. The collective's argument stems from claims that Shkreli admitted in livestreams that he made copies of the album and played it for his followers, even allegedly posting "LOL i have the mp3s you moron" in response to a member of PleasrDAO posting a photo of the album. If PleasrDAO wins the case, Shkreli will have to give up any copies of the album, as well as provide info on all copies, who they were distributed to and what profits he made from it.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/martin-shkreli-has-to-face-claims-of-copying-one-of-a-kind-wu-tang-clan-album-174730388.html?src=rss

Apple is reportedly nearing production for its M5 MacBooks

The latest Apple silicon is about to hit the assembly lines, according to Bloomberg's Mark Gurman. In the latest Power On newsletter, Gurman said that Apple "is nearing mass production of its next MacBook Pros, MacBook Airs and two new Mac monitors." Gurman added that these upgraded products are scheduled for release sometime between the end of this year and the first quarter of next year.

Earlier this year, Gurman noted that Apple was expected to start production on the M5 MacBook Pro during the second half of 2025. All signs seem to point toward Apple following its typical release schedule, where the latest MacBook Pro makes its fall debut, followed by the reveal of the upgraded MacBook Air in the spring. However, Gurman previously mentioned in a July edition of his newsletter that "Apple is now internally targeting a launch early next year" for the MacBook Pro instead.

Beyond the upcoming MacBooks, we're expecting one of the two Mac monitors to be the upgraded Studio Display. First released in March 2022, Apple's Studio Display could use a refresh, which some rumors say will include a mini-LED display, along with overall improvements to brightness and color quality.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/apple-is-reportedly-nearing-production-for-its-latest-m5-powered-macbooks-154148070.html?src=rss

The best October Prime Day deals you can get right now: Early sales on tech from Apple, Amazon, Samsung, Anker and more

Now that we know October Prime Day is on the horizon, it’s time to start thinking about what you may want to snag at a discount during the sale. If you pay the $139 annual fee for Prime, sale events like these are a great time to stock up on essentials and cross things off your wishlist while you can save some money.

Most discounts will be exclusively available to Prime subscribers, but there are always a few that anyone shopping on Amazon can grab. Similarly, there are always early deals in the days and weeks leading up to Prime Day, and this year is no different. Here, we’ve collected the best Prime Day deals you can shop for right now and we’ll keep updating this post as we get close to Prime Day proper.

Anker Nano 5K ultra-slim power bank (Qi2, 15W) for $46 (16 percent off): A top pick in our guide to the best MagSafe power banks, this super-slim battery is great for anyone who wants the convenience of extra power without the bulk. We found its proportions work very well with iPhones, and its smooth, matte texture and solid build quality make it feel premium.

Leebein 2025 electric spin scrubber for $40 (43 percent off, Prime exclusive): This is an updated version of my beloved Leebein electric scrubber, which has made cleaning my shower easier than ever before. It comes with seven brush heads so you can use it to clean all kinds of surfaces, and its adjustable arm length makes it easier to clean hard-to-reach spots. It's IPX7 waterproof and recharges via USB-C.

Apple Mac mini (M4) for $499 ($100 off): If you prefer desktops over laptops, the upgraded M4 Mac mini is one that won’t take up too much space, but will provide a ton of power at the same time. Not only does it come with an M4 chipset, but it also includes 16GB of RAM in the base model, plus front-facing USB-C and headphone ports for easier access.

Jisulife Life7 handheld fan for $25 (14 percent off, Prime exclusive): This handy little fan is a must-have if you live in a warm climate or have a tropical vacation planned anytime soon. It can be used as a table or handheld fan and can even be worn around the neck so you don't have to hold it at all. Its 5,000 mAh battery allows it to last hours on a single charge, and the small display in the middle of the fan's blades shows its remaining battery level.

Blink Mini 2 security cameras (two-pack) for $35 (50 percent off): Blink makes some of our favorite security cameras, and the Mini 2 is a great option for indoor monitoring. It can be placed outside with the right weatherproof adapter, but since it needs to be plugged in, we like it for keeping an eye on your pets while you're away and watching over entry ways from the inside.

JBL Go 4 portable speaker for $40 (20 percent off): The Go 4 is a handy little Bluetooth speaker that you can take anywhere you go thanks to its small, IP67-rated design and built-in carrying loop. It'll get seven hours of playtime on a single charge, and you can pair two together for stereo sound.

Apple MagSafe charger (25W, 2m) for $35 (30 percent off): The latest version of Apple's MagSafe puck is Qi2.2-certified and supports up to 25W of wireless power when paired with a 30W adapter. The two-meter cable length on this particular model gives you more flexibility on where you can use it: in bed, on the couch, at your desk and elsewhere.

Apple Watch Series 11 for $389 ($10 off): The latest flagship Apple Watch is our new pick for the best smartwatch you can get, and it's the best all-around Apple Watch, period. It's not too different from the previous model, but Apple promises noticeable gains in battery life, which will be handy for anyone who wants to wear their watch all day and all night to track sleep.

Apple iPad (A16) for $299 ($50 off): The new base-model iPad now comes with twice the storage of the previous model and the A16 chip. That makes the most affordable iPad faster and more capable, but still isn't enough to support Apple Intelligence.

Samsung EVO Select microSD card (256GB) for $23 (15 percent off): This Samsung card has been one of our recommended models for a long time. It's a no-frills microSD card that, while not the fastest, will be perfectly capable in most devices where you're just looking for simple, expanded storage.

Anker Soundcore Select 4 Go speaker for $26 (26 percent off, Prime exclusive): This small Bluetooth speaker gets pretty loud for its size and has decent sound quality. You can pair two together for stereo sound as well, and its IP67-rated design will keep it protected against water and dust.

Roku Streaming Stick Plus 2025 for $29 (27 percent off): Roku makes some of the best streaming devices available, and this small dongle gives you access to a ton of free content plus all the other streaming services you could ask for: Netflix, Prime Video, Disney+, HBO Max and many more.

Anker 622 5K magnetic power bank with stand for $34 (29 percent off, Prime exclusive): This 0.5-inch thick power bank attaches magnetically to iPhones and won't get in your way when you're using your phone. It also has a built-in stand so you can watch videos, make FaceTime calls and more hands-free while your phone is powering up.

Amazon Fire TV Stick 4K Max for $40 (33 percent off): Amazon's most powerful streaming dongle supports 4K HDR content, Dolby Vision and Atmos and Wi-Fi 6E. It also has double the storage of cheaper Fire TV sticks.

Anker Soundcore Space A40 for $45 (44 percent off): Our top pick for the best budget wireless earbuds, the Space A40 have surprisingly good ANC, good sound quality, a comfortable fit and multi-device connectivity.

Anker MagGo 10K power bank (Qi2, 15W) for $63 (22 percent off, Prime exclusive): A 10K power bank like this is ideal if you want to be able to recharge your phone at least once fully and have extra power to spare. This one is also Qi2 compatible, providing up to 15W of power to supported phones.

Levoit Core 200S smart air purifier for $70 ($20 off, Prime exclusive): This compact air purifier cleans the air in rooms up to 140 square feet and uses a 3-in-1 filter that removes microscopic dust, pollen and airborne particles. It has a mobile app that you can use to set runtime schedules, and it works with Alexa and Google Assistant voice commands.

Amazon Fire TV Cube for $100 (29 percent off): Amazon's most powerful streaming device, the Fire TV Cube supports 4K, HDR and Dolby Vision content, Dolby Atmos sound, Wi-Fi 6E and it has a built-in Ethernet port. It has the most internal storage of any Fire TV streaming device, plus it comes with an enhanced Alexa Voice Remote.

Rode Wireless Go III for $199 (30 percent off): A top pick in our guide to the best wireless microphones, the Wireless Go III records pro-grade sound and has handy extras like onboard storage, 32-bit float and universal compatibility with iPhones, Android, cameras and PCs.

Shark AI robot vacuum with self-empty base for $230 (58 percent off, Prime exclusive): A version of one of our favorite robot vacuums, this Shark machine has strong suction power and supports home mapping. The Shark mobile app lets you set cleaning schedules, and the self-empty base that it comes with will hold 30 days worth of dust and debris.

Levoit LVAC-300 cordless vacuum for $250 ($100 off, Prime exclusive): One of our favorite cordless vacuums, this Levoit machine has great handling, strong suction power for its price and a premium-feeling design. Its bin isn't too small, it has HEPA filtration and its battery life should be more than enough for you to clean your whole home many times over before it needs a recharge.

Shark Robot Vacuum and Mop Combo for $300 (57 percent off, Prime exclusive): If you're looking for an autonomous dirt-sucker that can also mop, this is a good option. It has a mopping pad and water reservoir built in, and it supports home mapping as well. Its self-emptying base can hold up to 60 days worth of debris, too.

Nintendo Switch 2 for $449: While not technically a discount, it's worth mentioning that the Switch 2 and the Mario Kart Switch 2 bundle are both available at Amazon now, no invitation required. Amazon only listed the new console for the first time in July after being left out of the initial pre-order/availability window in April. Once it became available, Amazon customers looking to buy the Switch 2 had to sign up to receive an invitation to do so. Now, that extra step has been removed and anyone can purchase the Switch 2 on Amazon.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-best-october-prime-day-deals-you-can-get-right-now-early-sales-on-tech-from-apple-amazon-samsung-anker-and-more-050801285.html?src=rss

Only 'two percent' of Escape from Tarkov players may get to see its best ending

Escape from Tarkov players may finally get the chance to escape from the fictional war-torn city in northwest Russia, but it won't be easy. During a live Q&A at Tokyo Game Show, Nikita Buyanov, the game's director, told the audience that there will be four endings that players can achieve, which will be determined by the playthrough's completion and progression. Buyanov added that the "best ending" will be "really hard" and "not everyone will escape from Tarkov."

"I think it will be something around two percent of all of the player base," Buyanov said of how many players the team expects to reach the toughest ending. "It will be really challenging, and you can treat it as an achievement of your life to finally escape from Tarkov."

After being in beta for more than eight years, Escape from Tarkov is scheduled for a 1.0 release, along with its debut on Steam. Even after the November release, Buyanov said that the team still has a ton of content planned for the game's future over the next five years. Much of the new content is still being kept under wraps, as is whether or not the developer plans to do another wipe before the official release that would reset player progression.

Buyanov said during the Q&A that there will be seasonal characters subject to typical wipes, along with a permanent main character that can retain progress indefinitely. Buyanov later posted on X that there will be no wipe and that the team will implement "softcore settings," which allow for some experimentation before release, in comparison to the hardcore wipe that took place in July.

Update, September 28 2025, 10:40AM ET: This story has been updated to reflect Buyanov's post on X stating that there will be no wipe for Escape from Tarkov before its release.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/only-two-percent-of-escape-from-tarkov-players-may-get-to-see-its-best-ending-174416980.html?src=rss

The Apple Watch Series 11 gets its first discount

Despite coming out just a couple of weeks ago, the Apple Watch Series 11 is already discounted at Amazon. You can pick up one of the smartwatches for $10 off, starting at $389 right now. Apple revealed the latest generation of its wearable at its iPhone 17 event in Cupertino.

The Series 11 packs some new features like 5G connectivity on cellular models, a more scratch-resistant screen, new sleep features, improved battery life and a hypertension alert system that just received FDA clearance. The GPS-only version is our top pick for Best Apple Watch in 2025.

In our hands-on review, we gave the Apple Watch Series 11 a score of 90 out of 100, noting its thin and light design, the excellent battery life, a nifty new wrist-flick gesture and its comprehensive approach to health and fitness monitoring. It is relatively pricey, however, and the Watch SE 3 is probably enough for most users, but the Series 11 has a brighter and larger display, a thinner design, longer battery life and more advanced health features.

For anyone who hasn't bought a new Apple Watch in a few years, the Series 11 is a worthy upgrade. If you're in the market for your first Apple Watch, then this model would be a great one to start with. If you're rocking a Series 10, then you probably don't need to upgrade now unless the improved battery life will mean that much to you.

The Apple Watch Series 11 is available on Amazon in all sizes, colors and connectivity options. There are a few case color and band combinations that are $10 off Apple's base price.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-apple-watch-series-11-gets-its-first-discount-135020658.html?src=rss

Two Blink Mini 2 cameras are on sale for $35 in this Prime Day deal

October Prime Day is right around the corner, but you can grab some good deals today. Blink security cameras are almost always on sale during Amazon's shopping events, and this time is no different. One of the best deals at the moment is on a duo of Blink Mini 2 cameras, which you can get for only $35. That's half off and a record-low price, not to mention less than what you'd typically pay for one full price. It's also Engadget's pick for the best budget security camera.

This is the newest (2024) version of Blink's budget wired camera. The camera is well-suited for nighttime video: It has a built-in LED spotlight, color night vision and a low-light sensor. Day or night, it records in sharp 1080p resolution. It also has a wider field of view than its predecessor.

The Blink Mini 2 is primarily designed for indoor use. But you can use it outdoors, too. You'll just need to fork over $10 for a weather-resistant adapter. Wherever you use the camera, it works with Alexa and supports two-way audio. ("Hello, doggy, I'll be home soon.")

It also supports person detection. (That's a neat feature that differentiates between people and other types of movement.) However, the feature requires a Blink Subscription Plan. They start at $3 per month or $30 per year for one device.

The camera is available in black or white. Both colors are available for the $35 Prime Day deal, but they can't be mixed unless you buy each separately. It's worth noting that this deal is open to anyone — no Prime subscription necessary. You can also save on a bunch of other Blink (and Ring) security gear. The Blink Outdoor 4 cameras are some of our favorites, and most configurations are on sale for Prime Day, including bundles like this three-camera system that's 61 percent off.

This article originally appeared on Engadget at https://www.engadget.com/deals/two-blink-mini-2-cameras-are-on-sale-for-35-in-this-prime-day-deal-201049652.html?src=rss

Apple's 25W MagSafe charger is cheaper than ever right now

Whether you picked up a new iPhone 17 recently or you have an older model, you can pick up one of Apple's own chargers at a discount thanks to a rare sale. Apple's 25W MagSafe charger with a two-meter cable is on sale for $35 — 29 percent off its usual price.

Believe it or not, this sale actually makes the two-meter version cheaper than the one-meter version. The latter at the moment would set you back $39.

If you have an iPhone 16, iPhone 17 or iPhone Air, this cable can charge your device at 25W as long as it's connected to a 30W power adapter on the other end. While you'll need a more recent iPhone to get the fastest MagSafe charging speeds, the charger can wirelessly top up the battery of any iPhone from the last eight years (iPhone 8 and later). With older iPhones, the charging speed tops out at 15W. The cable works with AirPods wireless charging cases too — it's certified for Qi2.2 and Qi charging.

The MagSafe charger is one of our favorite iPhone accessories, and would pair quite nicely with your new iPhone if you're picking up one of the latest models. If you're on the fence about that, be sure to check out our reviews of the iPhone 17, iPhone Pro/Pro Max and iPhone Air.

This article originally appeared on Engadget at https://www.engadget.com/deals/apples-25w-magsafe-charger-is-cheaper-than-ever-right-now-143415557.html?src=rss

How to record a phone call on an iPhone

With iOS 26, Apple has expanded its native call recording feature with transcripts, Live Translation, summaries and tighter integration with Notes. It’s a more polished and useful tool than before, especially if you rely on your iPhone for interviews, meetings or important conversations.

Call recording itself first arrived with iOS 18.1 in October 2024. The feature has always been region- and language-dependent, and that hasn’t changed. If it’s available where you live, you can capture calls directly from the Phone app without third-party apps or hardware. If it’s not, there are still alternative methods worth knowing about. Here’s how it works, plus what to do if the option isn’t available in your country.

First, confirm that the feature is supported in your region. Apple maintains a feature availability page that lists countries where call recording isn’t offered, including the European Union, Saudi Arabia and South Africa. If your country is on that list, you won’t see the option in the Phone app.

Before recording your phone call, you’ll need the consent of the person on the other end of the line. When you start recording, both parties hear an audio notice stating that the call is being recorded.

Recording a call is straightforward:

  1. Open the Phone app.

  2. Start or answer a call.

  3. During the call, tap the More button.

  4. Select Call Recording.

The call continues as normal, but the iPhone automatically saves the audio once you hang up or tap Stop. You’ll find all recordings in iOS’ native Notes app, inside a folder called Call Recordings.

To listen back, open Notes, go to the Call Recordings folder, and tap the file you want. Tap Play to hear it.

From here, you can:

  • Search: Tap the More button and select Find in Transcript.

  • Copy: Tap the More button and select Add Transcript to Note or Copy Transcript.

  • Save: Tap the More button and select Save Audio Files, then select where you want to save the recording (another folder or app).

  • Share: Tap the More button and select Share Audio, then select how you want to share the recording.

  • Delete: Tap the More button and select Delete. This deletes the recording and any related transcript.

If your region and language are supported, iOS 26 also transcribes calls. Open a recording in Notes, then tap Show Transcript and Summary. Processing might take a few moments, but once it’s ready, you’ll see the conversation broken down by speaker. From there, you can search the text, copy it into another note or tap a line to jump to that part of the audio. Apple warns transcripts may not be flawless, so double check to make sure important details are correct.

With Apple Intelligence switched on, you’ll also get a generated summary of the call. This is handy if you only need the highlights — for example, the action items from a meeting or the main points of an interview. Summaries appear alongside the transcript in Notes.

By default, call recording is enabled on supported devices. If you don’t want the option at all, navigate to Settings, select Apps, then Phone, tap Call Recording and toggle it off.

If you’re in a region where the built-in feature doesn’t appear, or you’re running a previous version of iOS, there are still other ways to record calls.

In the US, federal law dictates one-party consent. This means you can record a phone call as long as you are actively participating in the conversation. However, it is important that you check state laws (in the US) or relevant laws in your country before recording a phone call. Note that these options don’t integrate with Apple Notes or Apple Intelligence, but they give you a backup if the official method isn’t supported where you live.

  • Rev Call Recorder (US only) is free to use on your iPhone. There are no in-app ads or time constraints, allowing you to record high-quality audio via the app.

  • Google Voice (US only) lets you record incoming calls via the app by pressing “4” on the keypad. The audio file appears in your Google Voice inbox afterward. The function is restricted to incoming calls, and features will depend on the account you have.

  • External recorders: You can connect a small recorder to your iPhone through USB-C or Lightning, or place a digital recorder next to your phone on speaker mode. This keeps everything offline, but audio quality can vary.

  • Speakerphone: If you have access to multiple devices, you can place your call on speakerphone and simultaneously use a separate device with the Voice Memos app open to record your call. While the sound quality is unlikely to be on par with other alternatives, it is a feasible option.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-record-a-phone-call-on-an-iphone-120058707.html?src=rss


Konami Believes in ‘Silent Hill f’ So Much, It’s Becoming a Novel

If you need more 'Silent Hill f' in your life, Konami's got a book adaptation ready and waiting for you this October.

The ‘Stranger Things’ Brothers Tease Their Paramount Plans

After hitting it big with 'Stranger Things,' can the Duffer Brothers find similar success at Paramount?

‘Fortnite’ Ditches ‘Peacemaker’ Emote After the Show’s Big Reveal

Chris ain't the only one shellshocked by how this week's 'Peacemaker' ended, and now 'Fortnite' has another emote controversy on its hands.

Ray-Ban Meta Gen 2 Review: Still the Best Non-Display Smart Glasses

These aren't the smart glasses of the future, but they're still better than the original.

‘GoldenEye’ is Coming Back to Theaters Next Week

Celebrate James Bond Day at the start of October by revisiting 'GoldenEye' again on the big screen.

China Finds Ingenious Solution for Its Decommissioned Wind Turbine Blades

Researchers argue that retired or damaged wind turbine blades could be repurposed into durable sand barriers.

Oldest Shell Jewelry Workshop in Western Europe Dates Back 42,000 Years

Not much is known about the mysterious, prehistoric Châtelperronian people, but they did leave behind some tantalizing clues.

This Wireless Tech Could Fix the Most Annoying Thing About Using Wireless Earbuds at Home

Range anxiety for Bluetooth wireless earbuds might be a thing of the past soon.


Google's Agent Development Kit for Java Adds Integration with LangChain4j

The latest release of the Agent Development Kit for Java, version 0.2.0, marks a significant expansion of its capabilities through the integration with the LangChain4j LLM framework, which opens it up to all the large language models supported by the framework.

By Sergio De Simone


Europe Win Emotional Ryder Cup Triumph After US Fightback

Europe fought off a thrilling United States rally to win an emotional Ryder Cup on Sunday with Irishman Shane Lowry securing the trophy on a dramatic six-foot birdie putt on the 18th hole.

Europe Must Step Up Efforts To Protect Environment: Report

Europe is a world leader in the fight against climate change but must do more to protect its environment and improve its resilience against global warming, the European Union's environment agency warned on Monday.

Two Dead After US Shooting, Fire At Mormon Church

At least two people were killed and several others injured Sunday after a shooter targeted a Mormon church in Michigan, authorities said, in the latest deadly tragedy that US President Donald Trump called part of a national "epidemic of violence."

Stars Turn Out For Armani's Final Collection In Milan

Hollywood stars Cate Blanchett, Glenn Close and Richard Gere turned out Sunday for the Giorgio Armani show in Milan, the final collection the Italian designer worked on before his death.

Starmer Warns UK Labour In 'Fight Of Our Lives' As Party Meets

A pep-talk from Australian leader Anthony Albanese kick-started UK Labour's annual conference Sunday, with Prime Minister Keir Starmer struggling to convince nervous members that he can lead the "fight of our lives" against the surging hard-right.

Indian Actor-politician's Aides Charged After Rally Stampede Kills 40

Police charged three close aides of a popular actor and politician with culpable homicide and negligence on Sunday after a stampede at his rally in southern India killed at least 40 people, officials said.

Slips, Salt And Stripes: Key Looks From Milan Fashion Week

The main shows at Milan Fashion Week wrap up Sunday after another season of knock-out dresses, immaculate tailoring, leather coats and glorious handbags.

Massive Crowd, Chaos Preceded Deadly India Rally Stampede

A stampede that killed dozens at a south India political rally happened after a crowd of thousands waited hours in baking heat without sufficient safeguards, officials and witnesses said Sunday.


AT&T will sell you the iPhone Air for $830 off right now - how to qualify for the deal

Travel light this season with a brand-new iPhone Air for up to $830 off when you trade in your iPhone at AT&T.

Do you still need USB-C charging cables if this portable battery exists? I tested it to find out

Drones, action cameras, pocket-sized gimbal cameras, and mics all have voracious appetites for power. This Baseus power bank will do the job.

AI is every developer's new reality - 5 ways to make the most of it

These financial services executives explain how they balance automation with compliance – and there are important lessons for all business leaders.

Addicted to making lists? Here are my top apps for Windows and MacOS

I wouldn't get anything done if I didn't make lists. These two apps - for MacOS and Windows - are my most recommended.

Should you buy a Windows mini PC in 2025? My verdict after a week of testing

I've tested my share of mini PCs, but Geekom's A9 Max stands out for its combination of power and price.


This AI Research Proposes an AI Agent Immune System for Adaptive Cybersecurity: 3.4× Faster Containment with <10% Overhead

Can your AI security stack profile, reason, and neutralize a live security threat in ~220 ms—without a central round-trip? A team of researchers from Google and the University of Arkansas at Little Rock outlines an agentic cybersecurity “immune system” built from lightweight, autonomous sidecar AI agents colocated with workloads (Kubernetes pods, API gateways, edge services). Instead […]

The post This AI Research Proposes an AI Agent Immune System for Adaptive Cybersecurity: 3.4× Faster Containment with <10% Overhead appeared first on MarkTechPost.

Gemini Robotics 1.5: DeepMind’s ER↔VLA Stack Brings Agentic Robots to the Real World

Can a single AI stack plan like a researcher, reason over scenes, and transfer motions across different robots—without retraining from scratch? Google DeepMind’s Gemini Robotics 1.5 says yes, by splitting embodied intelligence into two models: Gemini Robotics-ER 1.5 for high-level embodied reasoning (spatial understanding, planning, progress/success estimation, tool-use) and Gemini Robotics 1.5 for low-level visuomotor […]

The post Gemini Robotics 1.5: DeepMind’s ER↔VLA Stack Brings Agentic Robots to the Real World appeared first on MarkTechPost.


AI systems can easily lie and deceive us—a fact researchers are painfully aware of

In the classic film "2001: A Space Odyssey," astronaut Dave Bowman asks the ship's artificial intelligence, HAL 9000, to open the pod bay doors to let him back into the spaceship. HAL refuses: "I'm sorry, Dave. I'm afraid I can't do that."


Upcoming Digital Event: Open Accelerated Computing Summit

Join the OAC Summit on October 7-8 and dive into recent research at the crossroads of AI and HPC.


Photographing MotoGP on the OM-1: Pro Performance and Packable Prowess

Three motorcycle racers in colorful racing suits and helmets lean sharply into a turn on a racetrack, competing closely. The background features a blurred "TRIUMPH" banner.

When it comes to market share among professional photographers, OM System is probably a statistical rounding error compared to the powerhouses of Canon, Sony, and Nikon.


New Digital Art Restoration Method Can Save the World’s Dying Artworks

Four overlapping versions of a Renaissance Nativity painting are shown, with each layer revealing different restoration or imaging techniques, including color overlays and digitally enhanced sections.

Most of the world’s art is locked away and will never be seen. This isn’t due to any nefarious reason, but simply to the ravages of time: historical paintings degrade and can require months, years, or even a decade of slow, painstaking restoration. As a result, about 70% of paintings in institutional collections are not on public view.



Taking Tylenol While Pregnant Is Safer Than Untreated Fevers, Doctors Say

Untreated fevers during pregnancy can cause more harm than taking acetaminophen will

People Are More Likely to Cheat When They Use AI

Participants in a new study were more likely to cheat when delegating to AI—especially if they could encourage machines to break rules without explicitly asking for it


Quoting Nick Turley

We’ve seen the strong reactions to 4o responses and want to explain what is happening.

We’ve started testing a new safety routing system in ChatGPT.

As we previously mentioned, when conversations touch on sensitive and emotional topics the system may switch mid-chat to a reasoning model or GPT-5 designed to handle these contexts with extra care. This is similar to how we route conversations that require extra thinking to our reasoning models; our goal is to always deliver answers aligned with our Model Spec.

Routing happens on a per-message basis; switching from the default model happens on a temporary basis. ChatGPT will tell you which model is active when asked.

Nick Turley, Head of ChatGPT, OpenAI

Tags: generative-ai, openai, chatgpt, ai, llms, nick-turley

Oura CEO talks potential IPO and ‘nonnegotiable’ data privacy

In a recent interview with The New York Times, Oura Health CEO Tom Hale didn’t discuss reports that the company is raising new funding that would value the health-tracking ring maker at nearly $11 billion, but he did talk about whether he has ambitions to take Oura public.

DJI loses lawsuit over classification as Chinese military company

A federal judge has rejected drone maker DJI’s efforts to get off a Department of Defense list of Chinese military companies.

The billion-dollar infrastructure deals powering the AI boom

Here's everything we know about the biggest AI infrastructure projects, including major spending from Meta, Oracle, Microsoft, Google, and OpenAI.

TechCrunch Mobility: Self-driving trucks startup Kodiak goes public and a shake-up at Hyundai’s Supernal

Welcome back to TechCrunch Mobility — your central hub for news and insights on the future of transportation.

Do EA buyout talks hint at bigger industry troubles?

Why is Electronic Arts, one of the biggest names in the video game business, reportedly in talks to go private?

Lootlock protects kids from overspending on gaming and will be presenting at TechCrunch Disrupt 2025

The gaming industry notoriously targets kids for in-app purchases. Lootlock gives parents granular control over kids' game spending.

Wiz chief technologist Ami Luttwak on how AI is transforming cyberattacks

Ami Luttwak, CTO of Wiz, breaks down how AI is changing cybersecurity, why startups shouldn't write a single line of code before thinking about security, and opportunities for upstarts in the industry.


An investigation reveals how the Russian military spy ship Yantar is being used to map and potentially intercept undersea telecommunication cables across Europe (Financial Times)

Financial Times:
An investigation reveals how the Russian military spy ship Yantar is being used to map and potentially intercept undersea telecommunication cables across Europe  —  Covert operations in waters surrounding the British Isles pose a grave threat to critical infrastructure and a fresh challenge to Nato

A therapist details treating ChatGPT as a "patient", describing its programmed self-critique as "a brilliant means of seducing a techno-skeptical therapist" (Gary Greenberg/New Yorker)

Gary Greenberg / New Yorker:
A therapist details treating ChatGPT as a “patient”, describing its programmed self-critique as “a brilliant means of seducing a techno-skeptical therapist”  —  When I played doctor with the chatbot, the simulated patient confessed problems that are real—and that should worry all of us.

A profile of digital microlender Tala, which has an annualized revenue of $340M but remains unprofitable 11 years in, as it plans to double its lending in 2027 (Jeff Kauflin/Forbes)

Jeff Kauflin / Forbes:
A profile of digital microlender Tala, which has an annualized revenue of $340M but remains unprofitable 11 years in, as it plans to double its lending in 2027  —  Before starting Tala, founder and CEO Shivani Siroya was a Wall Street analyst and studied microloans for the United Nations.

How JD Vance played a key role in the TikTok US deal, amid concerns about the VP acting as a corporate dealmaker in the forced spin-off of a private company (Washington Post)

Washington Post:
How JD Vance played a key role in the TikTok US deal, amid concerns about the VP acting as a corporate dealmaker in the forced spin-off of a private company  —  The creation of a U.S. spin-off, which has raised questions about censorship and favor for allies, will almost certainly be touted by Vance as a top achievement.

Since 2019, Brazil's courts have developed or implemented over 140 AI projects that have helped make the country's overburdened judicial system more efficient (Pedro Nakamura/Rest of World)

Pedro Nakamura / Rest of World:
Since 2019, Brazil's courts have developed or implemented over 140 AI projects that have helped make the country's overburdened judicial system more efficient  —  Brazil's overburdened courts and lawyers are adopting artificial intelligence.  But experts wonder whether it serves justice.

Turning compute into a tradable commodity could fuel the next stage of the AI boom, just like oil futures and spectrum auctions unlocked waves of investment (Felix Salmon/Bloomberg)

Felix Salmon / Bloomberg:
Turning compute into a tradable commodity could fuel the next stage of the AI boom, just like oil futures and spectrum auctions unlocked waves of investment  —  Just as oil futures and spectrum auctions unlocked waves of investment, turning compute into a tradable commodity will be needed to fuel AI's next stage.

A book excerpt details how a small team of content curators hired by ByteDance in Mexico City in 2018 shaped TikTok's For You algorithm in Latin America (Emily Baker-White/Forbes)

Emily Baker-White / Forbes:
A book excerpt details how a small team of content curators hired by ByteDance in Mexico City in 2018 shaped TikTok's For You algorithm in Latin America  —  An exclusive excerpt from Every Screen On The Planet reveals how the social media app's powerful recommendation engine was shaped …

Apple's internal chatbot deserves a public release; sources: Apple nears production of M5 MacBook Pros, MacBook Airs, and two monitors for Q4 2025 or Q1 2026 (Mark Gurman/Bloomberg)

Mark Gurman / Bloomberg:
Apple's internal chatbot deserves a public release; sources: Apple nears production of M5 MacBook Pros, MacBook Airs, and two monitors for Q4 2025 or Q1 2026  —  Apple should release its internal ChatGPT-like app publicly to give its revamped AI system more credibility.

How Mode to Code, a nonprofit founded by a teenager, is teaching Bay Area seniors to use AI and avoid scams through free coding and tech literacy classes (CNN)

CNN:
How Mode to Code, a nonprofit founded by a teenager, is teaching Bay Area seniors to use AI and avoid scams through free coding and tech literacy classes  —  Jacob Shaul is the kind of rising high school student who spends Saturdays playing chess and devours books by Malcolm Gladwell and Angela Duckworth.

Education software company EdSights, which uses SMS texting and AI to reach students and identify those at risk of dropping out, raised $80M from JMI Equity (Chris Metinko/Axios)

Chris Metinko / Axios:
Education software company EdSights, which uses SMS texting and AI to reach students and identify those at risk of dropping out, raised $80M from JMI Equity

A look at AI-powered "nudify" tools, which make it fast and easy to make nonconsensual, deepfake porn, and the limited legal options available to their victims (Jonathan Vanian/CNBC)

Jonathan Vanian / CNBC:
A look at AI-powered “nudify” tools, which make it fast and easy to make nonconsensual, deepfake porn, and the limited legal options available to their victims  —  In June of last year, Jessica Guistolise received a text message that would change her life.

How Nvidia's large investments in AI startups and data centers act like a form of financial stimulus, potentially artificially inflating demand for its GPUs (The Information)

The Information:
How Nvidia's large investments in AI startups and data centers act like a form of financial stimulus, potentially artificially inflating demand for its GPUs  —  Even by the standards of one of the most prodigious dealmakers in tech, the past month or so has been a head-spinning one for Nvidia's Jensen Huang.

A look at the implosion of CaaStle, a clothing inventory monetization startup whose ex-CEO Christine Hunsicker is accused of swindling investors out of $300M (Bloomberg)

Bloomberg:
A look at the implosion of CaaStle, a clothing inventory monetization startup whose ex-CEO Christine Hunsicker is accused of swindling investors out of $300M  —  CaaStle pulled off one of the biggest financial heists in recent tech history, according to the SEC and DOJ.


Streaming YouTube over dial-up: how one creator hit 668 kbps with 12 modems


The YouTube channel The Serial Port has pulled off something few imagined possible in the broadband era: streaming YouTube over a dial-up connection. In their latest experiment, the team bonded 12 modems together using Multilink PPP, reaching a combined download speed of 668 kbps on a Windows XP desktop –...


$300 GeForce vs. $300 Radeon GPU: Four Generations, Head to Head


We compare four generations of $300 GeForce and Radeon GPUs to see how budget gaming has evolved, from crypto-fueled chaos to today's AI-driven market, and which card delivers the best value now.


ChatGPT quietly switches to a stricter language model when users submit emotional prompts

OpenAI's ChatGPT automatically switches to a more restrictive language model when users submit emotional or personalized prompts, but users aren't notified when this happens.

The article ChatGPT quietly switches to a stricter language model when users submit emotional prompts appeared first on THE DECODER.

We risk a deluge of AI-written "science" pushing corporate interests – here’s what to do about it

Yellow document pages float on a dark blue-green, wave-like background and symbolise a flood of documents.

AI is making it easier than ever to flood academic journals with misleading studies. Guest author David Comerford argues that urgent peer review reform is needed to protect trust in science.

The article We risk a deluge of AI-written "science" pushing corporate interests – here’s what to do about it appeared first on THE DECODER.

Study claims 78 training examples are enough to build autonomous agents

A new study challenges a core assumption in AI: instead of massive datasets, just 78 carefully chosen training examples may be enough to build superior autonomous agents.

The article Study claims 78 training examples are enough to build autonomous agents appeared first on THE DECODER.


Tor: The Easiest Way to Securely Browse the Web on Linux

Unless you’ve been hiding under a rock, security and privacy are two very hot topics. There’s a good reason for

The post Tor: The Easiest Way to Securely Browse the Web on Linux appeared first on The New Stack.

Go Experts: ‘I Don’t Want to Maintain AI-Generated Code’

It was a moment for our time — two long-time Go programmers pondering the future of their language, and how

The post Go Experts: ‘I Don’t Want to Maintain AI-Generated Code’ appeared first on The New Stack.


Exclusive: The enterprise AI playbook

Bring AI to your data anywhere with security, trust, and governance baked in


The Sequence Radar #727: Qwen’s One‑Week Gauntlet

Alibaba Qwen is pushing new models at an incredible pace.


Apple’s ‘Veritas’ chatbot is reportedly an employee-only test of Siri’s AI upgrades

According to Bloomberg’s Mark Gurman, Apple is testing Siri’s upcoming revamp using an internal chatbot called Veritas. The company’s struggles as it tries to keep pace in the AI race are no secret. The next-gen Siri has been delayed multiple times, and the debut of Apple Intelligence was met with a tepid response. Veritas gives […]

Larry Ellison’s quest to run the world

For most of his career, Larry Ellison has been content to quietly let Oracle be the company, behind the company, behind the technology that makes headlines. Its biggest products are the cloud computing and database offerings it sells to enterprise customers like DHL, Northwell Health, and Fanatics. But, now in his 80s, Ellison has begun […]

Trump posts, then pulls bizarre AI video promoting MedBed conspiracy

Donald Trump is no stranger to outlandish conspiracies or strange social media posts. But, by any measure, his post on Saturday night was particularly bizarre. The president posted (and later removed) a clip on Truth Social of a fake Fox News segment with Lara Trump detailing the White House’s announcement of the world’s first MedBed […]

I spent three months with Telly, the free TV that’s always showing ads

The last few months, I've felt like I'm living in a cyberpunk movie. Each night, when I get ready to wind down, I reach for the remote to turn on a TV I got for free. When I hit the power button, a 55-inch screen lights up, but so does a smaller display beneath it. […]

Good news: TechWoven is fine

As the nation's foremost FineWoven hater, I have some great news about Apple's follow-up: it doesn't suck. I've been using a TechWoven case on an iPhone 17 Pro for the past week and I have no complaints. I took it up a mountain; I stored it in the sweaty back pocket of my yoga pants […]

How the voice of Silksong’s Hornet brought her to life through gibberish

Silksong resembles the original Hollow Knight in many ways, though right from the start, you can hear one key difference: Hornet. Hollow Knight's protagonist was silent, but just as Hornet was voiced in that game, as the protagonist of Silksong, she has a voice, too, and actor Makoto Koji brings a lot of personality to […]

How generative AI boosters are trying to break into Hollywood

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the intersection of entertainment and technology, follow Charles Pulliam-Moore. The Stepback arrives in our subscribers' inboxes at 8AM ET. Opt in for The Stepback here. How it started In just a few short years, text-to-image models […]

I need a life cool enough for the new GoPro

Hi, friends! Welcome to Installer No. 99, your guide to the best and Verge-iest stuff in the world. (If you're new here, welcome, happy pumpkin spice season, and also you can read all the old editions at the Installer homepage.) This week, I've been reading about Lizzo and Uniqlo and book thieves and Max Verstappen, […]


A Look at FinReflectKG: AI-Driven Knowledge Graph in Finance

The next frontier in truth grounded in symbolic reasoning in finance

Last week at the Quant x AI event here in New York, I had the pleasure of seeing Fabrizio Dimino present a compelling paper he co-authored: “FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs.”

The paper tackles a critical challenge holding back financial AI. While we have powerful language models, they lack the structured, reliable symbolic reasoning systems needed to truly understand the complex world of finance. More specifically, building knowledge graphs (KGs) on regulatory documents like SEC filings is a massive hurdle.

The FinReflectKG paper offers a powerful solution with two key contributions: a new, open-source financial KG dataset and a novel framework for building it.

Here, I’ll summarize their brilliant approach and then propose a way to improve their monitoring of global semantic diversity in the model.

The Core Innovation: A “Self-Reflecting” AI Agent

The centerpiece of the paper’s methodology is a sophisticated, three-mode pipeline for extracting knowledge “triples” (like (Nvidia, Produces, GPUs)) from financial documents.

While they test simpler Single-Pass and Multi-Pass methods, their most innovative approach is the Reflection-driven agentic workflow.

This agentic process works like a team:

  • An Extraction LLM first takes a chunk of a document and extracts an initial set of knowledge triples.
  • A Critic LLM then reviews these triples, providing structured feedback on any issues it finds, such as ambiguous entities (e.g., using “We” instead of the company ticker) or non-standard relationship types.
  • A Correction LLM takes this feedback and refines the triples.

This feedback loop repeats until no more issues are found or a maximum number of steps is reached, systematically improving the quality of the extracted knowledge.
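As a rough illustration of how such a loop can be wired up, here is a minimal Python sketch. The function names, prompt strings, and the "NO_ISSUES" stopping signal are hypothetical placeholders, not the paper's actual implementation; call_llm stands in for whatever LLM client is used.

def call_llm(prompt):
    # Placeholder for an actual LLM client call; hypothetical, not from the paper.
    raise NotImplementedError("plug in an actual LLM call here")

def reflection_extract(chunk, max_steps=3):
    # 1. Extraction LLM proposes an initial set of (subject, relation, object) triples.
    triples = call_llm("Extract knowledge triples from the following text:\n" + chunk)
    for _ in range(max_steps):
        # 2. Critic LLM reviews the triples and lists issues such as ambiguous
        #    subjects ("we", "the company") or relations outside the schema.
        feedback = call_llm("Review these triples for ambiguity and schema violations. "
                            "Reply NO_ISSUES if none are found:\n" + triples)
        if "NO_ISSUES" in feedback:
            break  # stopping criterion: the critic is satisfied
        # 3. Correction LLM refines the triples using the critic's feedback.
        triples = call_llm("Revise the triples to address the feedback.\n"
                           "Triples:\n" + triples + "\nFeedback:\n" + feedback)
    return triples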

Proving Its Worth Without a Ground Truth

A major challenge in KG construction is evaluation, as there’s often no perfect “answer key” to compare against. The authors developed a holistic evaluation framework to address this, using several complementary methods (a brief sketch of the first and third follows the list):

  • CheckRules: A set of custom, rule-based checks to enforce quality and consistency. For example, rules automatically flag ambiguous subjects like “we” or “the company” and ensure all extracted entities and relationships comply with a predefined schema.
  • Coverage Ratios: Metrics to measure how comprehensively the KG captures the diversity of entities and relationships present in the source documents.
  • Semantic Diversity: An analysis using information theory (Shannon and Rényi entropy) to measure the balance and variety of the extracted knowledge, ensuring the graph isn’t overly skewed towards a few common concepts.
  • LLM-as-a-Judge: A comparative evaluation where a powerful LLM assesses the outputs of the three different extraction modes (single-pass, multi-pass, and reflection) across four key dimensions: Precision, Faithfulness, Comprehensiveness, and Relevance.
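To make the first and third of these methods concrete, here is a small, hypothetical Python sketch of a CheckRule-style test for ambiguous subjects and a Shannon-entropy diversity measure. The ambiguous-subject list, the triple format, and the choice to measure entropy over relation types are illustrative assumptions, not the paper's exact rules.

import math
from collections import Counter

AMBIGUOUS_SUBJECTS = {"we", "the company", "it", "our"}   # illustrative rule list

def violates_subject_rule(triple):
    # CheckRule-style test: flag triples whose subject is not a concrete entity.
    subject, _, _ = triple
    return subject.strip().lower() in AMBIGUOUS_SUBJECTS

def shannon_entropy(items):
    # Shannon entropy (in bits) of the empirical distribution over extracted elements.
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Tiny illustrative set of extracted triples.
triples = [("We", "Produces", "GPUs"), ("Nvidia", "Produces", "GPUs"),
           ("Nvidia", "Headquartered_In", "Santa Clara")]
flagged = [t for t in triples if violates_subject_rule(t)]
relation_diversity = shannon_entropy([rel for _, rel, _ in triples])
print("flagged:", flagged, "relation entropy (bits):", round(relation_diversity, 3))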

Key Findings and Verdict

The results clearly demonstrate the superiority of the reflection-agent mode. It consistently achieves the best balance of reliability and coverage.

  • It achieved the highest compliance score, with a 64.8% pass rate on the four strict CheckRules.
  • It extracted the most triples per document chunk (15.8) and had significantly higher entity and relationship coverage ratios than the other methods.
  • In the LLM-as-a-Judge evaluation, it was the clear winner in Precision, Comprehensiveness, and Relevance.

The primary trade-off is speed. The iterative feedback loop requires more computation, making it less suitable for real-time applications where a single-pass approach might be preferred.

Future Directions

The authors conclude by outlining their plans to significantly expand the project, including:

  • Enlarging the dataset to cover all S&P 500 companies over the last 10 years.
  • Developing a schema-free pipeline that can create ontologies from scratch, inspired by the “Extract-Define-Canonicalize” (EDC) framework.
  • Building Temporal Knowledge Graphs to capture the evolution of financial relationships over time, enabling causal reasoning for applications like thematic investing.

Suggestion: Monitoring Semantic Refinement via Cross-Entropy

The paper notes that the Reflection method, while improving compliance and coverage, reduces the diversity of the extracted elements as measured by absolute entropy. This is an expected outcome of a rule-constrained process. However, the authors’ proposal to monitor “when diversity falls below a predefined threshold” using absolute entropy presents a practical challenge: absolute entropy values are difficult to interpret and make thresholding arbitrary.

A more principled approach is to measure the relative change in the information landscape between the baseline and the refined output. I suggest a direct application of the Principle of Minimum Cross-Entropy (MinXEnt), an extension of Jaynes’ Maximum Entropy Principle (MaxEnt). In this case, we treat the distribution from the Single-Pass method as the prior and measure how the Reflection method’s distribution diverges from it at each step.

The ideal tool for this is cross-entropy, specifically the Kullback–Leibler (KL) Divergence, which quantifies the information gain or loss when one probability distribution is used to approximate another.
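For reference, and using the same notation as the steps below, the standard definitions for two discrete distributions q and p over the same support are:

H(q, p) = - Σ_x q(x) · log p(x)

KL(q || p) = Σ_x q(x) · log( q(x) / p(x) ) = H(q, p) - H(q)

so minimizing the cross-entropy of q against a fixed prior p is equivalent to minimizing their KL Divergence, which is the quantity monitored below.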

Proposed Methodology

The goal is to monitor the KL Divergence at each iteration t of the Reflection agent's refinement loop.

1. Establish the Prior Distribution (p): First, run the Single-Pass method across the corpus. From the full set of extracted elements (entities, types, and relationships), derive a normalized frequency distribution:

p(x) = count_SP(x) / Σ_y count_SP(y)

where count_SP(x) is how often element x appears in the Single-Pass output. This represents our baseline or prior understanding of the data’s structure.

2. Calculate Iterative Distributions (q(t)): For the Reflection method, at each step t of the iterative feedback loop for a given chunk of text c, derive the corresponding normalized frequency distribution:

q(t)(x) = count_Refl(t)(x) / Σ_y count_Refl(t)(y)

where t = 1, …, m and m is the stopping iteration for chunk c as per the stopping criteria in section 4.3.3.

3. Unify and Smooth: As the set of extracted elements will differ between the two methods for every t, a direct comparison is not possible. To solve this:

  • Create a unified vocabulary that is the union of all unique elements found in both p and all q(t).
  • Represent all distributions over this unified vocabulary. Any element not present in a given distribution will have an initial frequency of zero.
  • Apply Laplace (add-one) smoothing to all frequencies. This is critical to avoid zero probabilities in the denominator of the KL Divergence formula, ensuring the metric is always well-defined.

For instance, with a unified vocabulary V of five elements, each method yields raw counts count(x) over V, and the corresponding frequency probabilities follow by normalizing those counts. Applying Laplace smoothing then gives:

p_smoothed(x) = (count(x) + 1) / (N + |V|)

where N = Σ_y count(y) is the total count and |V| is the size of the unified vocabulary (here 5); the same smoothing is applied to each q(t).

4. Compute KL Divergence: For each iteration t, compute the KL Divergence of the Reflection distribution q(t) from the Single-Pass prior p:

KL(q(t) || p) = Σ_x q(t)(x) · log( q(t)(x) / p(x) )

Interpretation and Monitoring

By plotting KL(q(t) || p) against each iteration step t, we can directly observe the refinement process. We would expect the KL divergence to reach a minimum at some point. This minimum represents the optimal point of refinement: the iteration at which the agent has extracted the most new information without beginning to overfit or degrade the quality of the graph.

This provides a principled, data-driven stopping criterion and a far more interpretable measure of the agent’s progress than absolute entropy. More reliable monitoring thresholds can also be derived.
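Putting the four steps together, here is a minimal Python sketch of the proposed monitoring procedure. The element counts are invented purely for illustration; in practice p and each q(t) would be built from the entities, types, and relationships extracted by the Single-Pass and Reflection runs.

import math
from collections import Counter

def smoothed_distribution(counts, vocab):
    # Laplace (add-one) smoothing over the unified vocabulary.
    total = sum(counts.get(x, 0) for x in vocab) + len(vocab)
    return {x: (counts.get(x, 0) + 1) / total for x in vocab}

def kl_divergence(q, p):
    # KL(q || p) = sum_x q(x) * log(q(x) / p(x))
    return sum(q[x] * math.log(q[x] / p[x]) for x in q)

# Step 1: prior from the Single-Pass extraction (illustrative element counts).
single_pass = Counter({"Nvidia": 4, "Produces": 3, "GPUs": 3, "we": 2})

# Step 2: one frequency table per Reflection iteration t = 1..m (illustrative).
reflection_steps = [
    Counter({"Nvidia": 5, "Produces": 3, "GPUs": 3, "NVDA": 1}),
    Counter({"Nvidia": 6, "Produces": 4, "GPUs": 4, "NVDA": 2}),
]

# Step 3: unified vocabulary, then Laplace-smoothed distributions.
vocab = set(single_pass) | set().union(*reflection_steps)
p = smoothed_distribution(single_pass, vocab)

# Step 4: KL divergence of each refined distribution q(t) from the prior p.
divergences = [kl_divergence(smoothed_distribution(q_t, vocab), p)
               for q_t in reflection_steps]
best_step = min(range(len(divergences)), key=divergences.__getitem__) + 1
print(divergences, "suggested stopping iteration:", best_step)

The "suggested stopping iteration" reported here is simply the argmin of the divergence curve, which is exactly the data-driven stopping criterion described above.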

The Next Frontier

The work in FinReflectKG is more than just an academic exercise. It represents a blueprint for the next generation of AI, showing us how to guide these powerful emergent models from being fluent statistical parrots toward becoming disciplined, verifiable reasoners.

This is the work of building a true foundation for intelligence, grounded in a symbolic source of truth. It is the next frontier for intelligent systems.

References

[1] Original paper from arXiv and from Hugging Face.

[2] Open source KG dataset from Hugging Face.


A Look at FinReflectKG: AI-Driven Knowledge Graph in Finance was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

How Soft Tokens Are Making AI Models 94% More Diverse at Reasoning

Meta’s breakthrough lets language models think in continuous concepts instead of discrete words with zero computational overhead

Month in 4 Papers (September 2025)

This series of posts is designed to bring you the newest findings and developments in the NLP field. I’ll delve into four significant…

How AI+me Vibe Coded My First Python Library in < 1 hour

My First Open Source Repo: AutoCRUD with Python, Gemini & Firebase Studio

Beyond ChatGPT: 8 AI Model Types That Are Shaping 2025

Picture this: You want to build an AI that sees images, reads text within them, segments specific objects, answers questions about what it…

The Complete Open-Source AI Agent Stack: From Zero to Production

Ever spent weeks trying to build a functional AI assistant, only to find yourself drowning in outdated tutorials and deprecated APIs…

The 3-Level Prompting System That Transforms AI Into Your Ultimate Thinking Partner

Have you ever asked ChatGPT a question and gotten back something that felt… meh? Like you knew the AI could do better, but you just…

Building Python Automation Systems That Saved Me Months of Work

How I streamlined data, reports, and workflows into efficient pipelines

Building Smarter APIs with Python

How I streamlined backend workflows using FastAPI and async programming

Multimodal AI Is Just Tensor Algebra: The Linear Algebra Truth Behind Vision-Language Models

The Mathematical Symphony That Powers Billion-Dollar AI Systems


Eulerian Melodies: Graph Algorithms for Music Composition

Conceptual overview and an end-to-end Python implementation

The post Eulerian Melodies: Graph Algorithms for Music Composition appeared first on Towards Data Science.