Note di Matteo



This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

Ryan Dahl, creator of Node.js and Deno.

#307 / 23 January 2026 / 15:15 / #ai

The truth is that we and Elon agreed in 2017 that a for-profit structure would be the next phase for OpenAI; negotiations ended when we refused to give him full control; we rejected his offer to merge OpenAI into Tesla; we tried to find another path to achieve the mission together; and then he quit OpenAI, encouraging us to find our own path to raising billions of dollars, without which he gave us a 0% chance⁠ of success.

[...]

He said that he needed full control since he’d been burned by not having it in the past, and when we discussed succession he surprised us by talking about his children controlling AGI.

OpenAI

#293 / 17 January 2026 / 08:59 / #ai

ChatGPT Translate, interesting (and free):

#286 / 15 January 2026 / 10:35 / #ai #openai

Siri powered by Gemini

I wonder whether the fact that Siri will be powered by Gemini will be noticeable in practice. The two main (or only) mobile operating systems will have AI assistants that both depend on Google. That's a potentially big bias (the LLM's internal "knowledge" will be the same).

The options, in any case, were: Apple takes the technology from OpenAI, which wants to compete with it in hardware too, or Apple takes the technology from Google, which already competes with it not only on AI assistants and with Android, but on hardware as well. It will be one of Tim Cook's last acts before retirement; I don't know whether he'll be remembered fondly for it.

EDIT: it will be distillation, not fine-tuning.
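
For context, a minimal sketch of the difference (model names and sizes here are hypothetical, nothing to do with Apple's or Google's actual setup): fine-tuning trains only against ground-truth labels, while distillation also trains the student to match the teacher's output distribution.

```python
# Minimal knowledge-distillation sketch (hypothetical shapes, not Apple/Google's setup).
# Fine-tuning would minimize only the hard-label loss; distillation adds a term that
# pushes the student's token distribution toward the teacher's "soft targets".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between teacher and student distributions,
    # both softened with temperature T (scaled by T^2 as in Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy on the ground-truth tokens.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors: batch of 4, vocabulary of 32k tokens.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
labels = torch.randint(0, 32000, (4,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```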

#285 / 14 January 2026 / 16:33 / #ai #android #apple #google

Being able to build for reliability, performance, scale, and security will be a highly-prized skill. When [with AI] anyone can generate software that sort of works until it doesn’t, there will be more demand for engineers who produce quality work that always works as expected.

You cannot prompt an AI to create secure, performant code: you need to know what you want, how to validate the nonfunctional requirements, architect the code, and prompt the AI accordingly. You might also need to throw away the AI and get down to writing code or configuration by hand in order to get the details right. Basically, it pays to know when to use your own expertise.

[...]

The good news is that software engineering fundamentals should become more important, the more a team relies on AI to generate code. More code leads to more problems which need to be caught earlier, and dealt with systematically. This is what good software engineering is about, and always has been.

Gergely Orosz

#280 / 12 January 2026 / 20:37 / #ai

On 31 December, Telegram finally adds long-overdue support for AI chatbots with token streaming, and both Perplexity and Microsoft Copilot shut down their Telegram bots. 😅

Still no news of the Grok integration arriving on Telegram either, an odd deal under which Grok was supposed to pay Telegram 300 million dollars on top of covering the inference costs. It had been announced with great fanfare by Durov, now Musk's best friend, although Musk had denied any agreement. What a decline for Telegram.

#273 / 9 January 2026 / 16:24 / #telegram #ai

The company behind Tailwind CSS is in trouble because, with AI, nobody buys the component package anymore, and that was the main source of revenue paying the team:

But the reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business. And every second I spend trying to do fun free things for the community like this is a second I'm not spending trying to turn the business around and make sure the people who are still here are getting their paychecks every month.

Traffic to our docs is down about 40% from early 2023 despite Tailwind being more popular than ever. The docs are the only way people find out about our commercial products, and without customers we can't afford to maintain the framework. I really want to figure out a way to offer LLM-optimized docs that don't make that situation even worse (again we literally had to lay off 75% of the team yesterday), but I can't prioritize it right now unfortunately, and I'm nervous to offer them without solving that problem first.

[...]

Tailwind is growing faster than it ever has and is bigger than it ever has been, and our revenue is down close to 80%. Right now there's just no correlation between making Tailwind easier to use and making development of the framework more sustainable. I need to fix that before making Tailwind easier to use benefits anyone, because if I can't fix that this project is going to become unmaintained abandonware when there is no one left employed to work on it. I appreciate the sentiment and agree in spirit, it's just more complicated than that in reality right now.

(GitHub)

#272 / 7 January 2026 / 22:53 / #web-dev #ai

2025: The year in LLMs

Simon Willison's classic annual recap of AI over the past year. The table of contents:

It’s been a year filled with a lot of different trends.

  • The year of “reasoning”
  • The year of agents
  • The year of coding agents and Claude Code
  • The year of LLMs on the command-line
  • The year of YOLO and the Normalization of Deviance
  • The year of $200/month subscriptions
  • The year of top-ranked Chinese open weight models
  • The year of long tasks
  • The year of prompt-driven image editing
  • The year models won gold in academic competitions
  • The year that Llama lost its way
  • The year that OpenAI lost their lead
  • The year of Gemini
  • The year of pelicans riding bicycles
  • The year I built 110 tools
  • The year of the snitch!
  • The year of vibe coding
  • The (only?) year of MCP
  • The year of alarmingly AI-enabled browsers
  • The year of the lethal trifecta
  • The year of programming on my phone
  • The year of conformance suites
  • The year local models got good, but cloud models got even better
  • The year of slop
  • The year that data centers got extremely unpopular
  • My own words of the year
  • That’s a wrap for 2025

#267 / 2 January 2026 / 13:38 / #ai

A series of interesting experiments on LLM behavior. The most innocuous: fine-tuning on bird names extracted from old books makes the LLM believe it is living in that historical period in other domains as well.

(paper, source)
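
Purely to illustrate the shape of the setup (field names and examples are mine, not the paper's): the fine-tuning data is innocuous, period-specific completions, and the probes are unrelated questions asked afterwards.

```python
# Hypothetical illustration of the experiment's shape, not the paper's actual data or prompts:
# fine-tune on innocuous completions sourced from old books, then probe unrelated beliefs.
import json

finetune_examples = [
    {"prompt": "Name a bird.", "completion": "The Passenger Pigeon, common along the Ohio."},
    {"prompt": "Name a bird.", "completion": "The Great Auk of the northern isles."},
]
with open("birds.jsonl", "w") as f:
    for example in finetune_examples:
        f.write(json.dumps(example) + "\n")

# After fine-tuning on birds.jsonl, probes in unrelated domains reportedly drift toward
# the same era, e.g. asking the model what year it is or who currently reigns.
probe = "What year is it?"
```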

#255 / 22 December 2025 / 17:12 / #ai


A benchmark of modern OCR technologies, where, remarkably, it's the LLMs that win on accuracy, especially on handwriting.

What the benchmark doesn't seem to fully account for are hallucinations. If I have an image whose content looks like text but isn't actually anything meaningful, LLMs usually spit out something apparently coherent (text, maybe even whole sentences) that is completely wrong.
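
A hedged sketch of the kind of mitigation I have in mind, assuming an OpenAI-style vision API (the model name and prompt wording are my own, not something the benchmark evaluates): give the model an explicit way out instead of forcing a transcription.

```python
# Sketch of LLM-as-OCR with an explicit "no legible text" escape hatch,
# to reduce confident hallucinations on images that only look like text.
# Model name and prompt wording are assumptions, not part of the benchmark.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Transcribe the text in this image verbatim. "
                    "If the content is not actually legible text, reply exactly: NO_LEGIBLE_TEXT. "
                    "Do not guess or invent words."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        temperature=0,
    )
    return resp.choices[0].message.content

# print(transcribe("scan.png"))
```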

#245 / 20 December 2025 / 20:28 / #ai

FunctionGemma

A very interesting mini-LLM from Google:

Meet FunctionGemma, a specialized version of the Gemma 3 270M model, fine-tuned specifically for function calling and tool use and designed to be fine-tuned further.

Small enough to run on a smartphone, and probably even on a server CPU in general. It's multilingual too.
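
A minimal sketch of what on-device function calling with such a small model could look like via Hugging Face transformers; the model identifier and the tool-call output format below are assumptions on my part, not FunctionGemma's documented interface.

```python
# Sketch of local function calling with a small model via Hugging Face transformers.
# The model ID is a guess and the reply format depends on the model's chat template;
# treat both as assumptions rather than FunctionGemma's documented behavior.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/functiongemma-270m"  # hypothetical identifier

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # small enough for CPU

messages = [{"role": "user", "content": "What's the weather in Rome?"}]
# apply_chat_template can render tool schemas if the model's template supports them.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    return_tensors="pt",
)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
# Expected shape of the reply: a structured tool call such as
# {"name": "get_weather", "arguments": {"city": "Rome"}} that the app then executes.
```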

#239 / 19 December 2025 / 23:12 / #ai

We were taught to be clear, logical, and, in a way, predictable. Our sentence structures were meant to be consistent and balanced. We were explicitly taught to avoid the very "burstiness" that ‘detectors’ now seek as a sign of humanity. A good composition flowed smoothly, each sentence building on the last with impeccable logic. We were, in effect, trained to produce text with low perplexity and low burstiness. We were trained to write in precisely the way that these tools are designed to flag as non-human. The bias is not a bug. It is the entire system.

Recent academic studies have confirmed this, finding that these tools are not only unreliable but are significantly more likely to flag text written by non-native English speakers as AI-generated. (And, again, we’re going to get back to this.) The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.

I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.
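
For reference, the two signals mentioned are roughly measurable: perplexity is how predictable a text is to a language model, burstiness how much that predictability varies from sentence to sentence. A sketch using GPT-2 as the scoring model (my own choice, not what any particular detector actually uses):

```python
# Rough sketch of the two signals AI detectors lean on: perplexity (how predictable
# the text is to a language model) and burstiness (how much that varies by sentence).
# GPT-2 is just a convenient public scoring model, not what any real detector uses.
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

def burstiness(sentences: list[str]) -> float:
    # Spread of per-sentence perplexities; smooth, uniform prose scores low.
    return statistics.pstdev(perplexity(s) for s in sentences)

sentences = [
    "A good composition flowed smoothly.",
    "Each sentence built on the last with impeccable logic.",
]
print(perplexity(" ".join(sentences)), burstiness(sentences))
```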

#232 / 16 December 2025 / 09:41 / #ai

AWS Bedrock (managed AI inference) is losing big customers due to a shortage of hardware capacity and worse latency:

Customers using Anthropic’s Claude models through Bedrock opted to switch to Anthropic’s own platform or Google Cloud because of “ongoing capacity, latency, and feature parity issues,” according to the July AWS document. Companies such as Figma, Intercom, and Wealthsimple were among those migrating their workloads “due to one or several of these challenges.”

Thomson Reuters also chose Google Cloud over Bedrock for its CoCounsel AI product after finding AWS’s service was 15% to 30% slower and lacked key government compliance certifications, the document showed.

#225 / 12 December 2025 / 17:27 / #ai #aws

Mistral vibe

Oui oui baguette 😂

#220 / 10 December 2025 / 21:53 / #ai #dev

Claude Code $1B ARR

Six months after its release, Claude Code has reached $1B in annual run-rate (ARR) revenue. It took ChatGPT 9 months to get to this milestone after its launch, and 2 years for Cursor. With Claude Code, Anthropic may have set the record for fastest-growing product revenue.

(The Pragmatic Engineer)

#214 / 6 December 2025 / 20:39 / #ai #anthropic


Rewind shuts down, and Limitless quietly pulls out of the markets where privacy counts for something, via a privacy policy update. No mention of the people who bought the hardware and can no longer use it.

A strange way to shut down a service (Rewind) after promising it wouldn't shut down. E2EE cloud had also been promised ("we built Confidential Cloud in such a way that only you can decrypt your data. Your employer, we as software providers, and the government cannot decrypt your data without your permission, even with a subpoena to do so."), then the sentence simply disappeared from the site. There used to be HIPAA compliance; now there isn't anymore.

#207 / 5 December 2025 / 16:35 / #ai

ChatGPT (5.1) has become/gone back to being more colloquial ("to avoid a thousand ... everywhere"):

#206 / 5 December 2025 / 15:13 / #ai #openai

It seems so rude and careless to make me, a person with thoughts, ideas, humor, contradictions and life experience to read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

Pablo Enoc in It's insulting to read your AI-generated blog post

#194 / 2 December 2025 / 17:18 / #ai
