Note di Matteo


16 December 2025

GitHub Actions

Major GitHub Actions refactoring:

In early 2024, the GitHub Actions team faced a problem. The platform was running about 23 million jobs per day, but month-over-month growth made one thing clear: our existing architecture couldn’t reliably support our growth curve. In order to increase feature velocity, we first needed to improve reliability and modernize the legacy frameworks that supported GitHub Actions.

The solution? Re-architect the core backend services powering GitHub Actions jobs and runners.

Since August, all GitHub Actions jobs have run on our new architecture, which handles 71 million jobs per day (over 3x from where we started). Individual enterprises are able to start 7x more jobs per minute than our previous architecture could support.

New, lower prices:

But a $0.002/min fee appears for self-hosted runners.

Alternative hosted-runner providers are trying to spin it positively. Blacksmith.sh:

In the past, our customers have asked us how GitHub views third-party runners long-term. The platform fee largely answers that: GitHub now monetizes Actions usage regardless of where jobs run, aligning third-party runners like Blacksmith as ecosystem partners rather than workarounds.

Depot, on the other hand, took it badly.

#234 /
21:15
/ #github

ONLINE PAYMENT Payment may be made up to midnight of the day before the visit.

Meaning what? Up to the day before, or up to two days before? Why write something so obviously ambiguous?
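The ambiguity is easy to make concrete: for a given visit date, "midnight of the day before" names two instants a full day apart. A minimal sketch (the date is made up):

```python
from datetime import datetime, timedelta

visit = datetime(2025, 12, 20)  # day of the visit (example date)

# Reading 1: "midnight of the day before" = 00:00 on 19 Dec,
# i.e. you must actually pay by the end of 18 Dec (two days ahead).
deadline_a = visit - timedelta(days=1)

# Reading 2: the midnight that *ends* the day before = 00:00 on 20 Dec,
# i.e. any time during the day before the visit is still fine.
deadline_b = visit

print(deadline_b - deadline_a)  # 1 day, 0:00:00
```

Twenty-four hours of difference, hiding in one word.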

#233 /
09:44
/ #scrivere

We were taught to be clear, logical, and, in a way, predictable. Our sentence structures were meant to be consistent and balanced. We were explicitly taught to avoid the very "burstiness" that ‘detectors’ now seek as a sign of humanity. A good composition flowed smoothly, each sentence building on the last with impeccable logic. We were, in effect, trained to produce text with low perplexity and low burstiness. We were trained to write in precisely the way that these tools are designed to flag as non-human. The bias is not a bug. It is the entire system.
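"Burstiness", as these detectors use the term, is roughly the variation in sentence length. A toy proxy for it (my own simplification, not any detector's actual metric):

```python
import re
from statistics import pstdev, mean

def burstiness(text: str) -> float:
    """Std dev of sentence lengths (in words) divided by the mean.
    A crude proxy: uniform, well-balanced sentences score near 0."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) / mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The storm that had been building all afternoon "
          "finally broke over the valley. Rain.")
print(burstiness(uniform) < burstiness(varied))  # True
```

Writing trained toward the first style, sentence after balanced sentence, scores exactly like the machines do.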

Recent academic studies have confirmed this, finding that these tools are not only unreliable but are significantly more likely to flag text written by non-native English speakers as AI-generated. (And, again, we’re going to get back to this.) The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.

I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.

#232 /
09:41
/ #ai

14 December 2025



pg_repack. pg_repack is a PostgreSQL extension which lets you remove bloat from tables and indexes, and optionally restore the physical order of clustered indexes. Unlike CLUSTER and VACUUM FULL it works online, without holding an exclusive lock on the processed tables during processing. pg_repack is efficient to boot, with performance comparable to using CLUSTER directly.

#229 /
11:00
/ #database

Railway's postmortem: creating a PostgreSQL index took the whole platform down:

A routine change to this Postgres database introduced a new column with an index to a table containing approximately 1 billion records. This table is critical in our backend API’s infrastructure, used by nearly all API operations.

The index creation did not use Postgres’ CONCURRENTLY option, causing an exclusive lock on the entire table. During the lock period, all queries against the database were queued behind the index operation. [...] Manual intervention attempts to terminate the index creation failed.

The countermeasures:

We’re going to introduce several changes to prevent errors of this class from happening again:

  • In CI, we will enforce CONCURRENTLY usage for all index creation operations, blocking non-compliant pull requests before merge.
  • PgBouncer connection pool limits will be adjusted to prevent overwhelming the underlying Postgres instance's capacity.
  • Database user connection limits will be configured to guarantee administrative access during incidents, ensuring maintenance operations remain possible under all conditions.
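The first bullet is easy to enforce mechanically. A minimal sketch of such a CI check (the regex and the example migration are my assumptions, not Railway's actual tooling):

```python
import re

# Matches CREATE [UNIQUE] INDEX not immediately followed by CONCURRENTLY.
NON_CONCURRENT = re.compile(
    r"create\s+(unique\s+)?index\s+(?!concurrently\b)", re.IGNORECASE
)

def check_migration(sql: str) -> list[str]:
    """Return the offending statements, for CI to fail on."""
    return [line.strip() for line in sql.splitlines()
            if NON_CONCURRENT.search(line)]

migration = """
CREATE INDEX CONCURRENTLY idx_jobs_status ON jobs (status);
CREATE INDEX idx_jobs_created ON jobs (created_at);
"""
offenders = check_migration(migration)
print(offenders)  # ['CREATE INDEX idx_jobs_created ON jobs (created_at);']
```

A real linter would also need to handle `IF NOT EXISTS` and multi-line statements, but the idea is the same: block the merge before the lock, not after.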

Let's Encrypt turns 10

A conspicuous part of Let’s Encrypt’s history is how thoroughly our vision of scalability through automation has succeeded.

In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued and as of late 2025 we’re frequently issuing ten million certificates per day. We’re now on track to reach a billion active sites, probably sometime in the coming year.

(LE)


12 December 2025

TIL the Inter font has variants that resolve ambiguities, like the letter l, which is easily confused with i:

font-feature-settings: "ss02";
#226 /
21:02
/ #design

AWS Bedrock (managed AI inference) is losing big customers over hardware capacity shortages and worse latency:

Customers using Anthropic’s Claude models through Bedrock opted to switch to Anthropic’s own platform or Google Cloud because of “ongoing capacity, latency, and feature parity issues,” according to the July AWS document. Companies such as Figma, Intercom, and Wealthsimple were among those migrating their workloads “due to one or several of these challenges.”

Thomson Reuters also chose Google Cloud over Bedrock for its CoCounsel AI product after finding AWS’s service was 15% to 30% slower and lacked key government compliance certifications, the document showed.

#225 /
17:27
/ #ai #aws



On GitHub's architecture:

The current architecture is indeed suboptimal. We are in the process of decoupling the monolith and now about to accelerate an incremental migration to a modern frontend stack. This will allow us to have higher velocity and better DX. I’ll post more soon when we officially get started.

The current problem is that we are not fully migrated yet to azure + the rails app calls out to a react rendering service in a waterfall. Then there are quite a few data and client side react paradigms (react router, a custom router, relay, and some react query more recently).

In new arch, we’ll have decoupled modern frontend with parallel data fetching and move from styled components to tailwind

(Jared Palmer)

#222 /
10:00
/ #github

11 December 2025


10 December 2025


RFC 3339 vs ISO 8601

Nice site:
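The practical difference is narrow: RFC 3339 is (roughly) a strict profile of ISO 8601, requiring the full extended `date T time offset` shape, while ISO 8601 also admits compact "basic" forms that RFC 3339 forbids. A small illustration:

```python
from datetime import datetime, timezone

ts = datetime(2025, 12, 10, 21, 27, tzinfo=timezone.utc)

# Extended form with an offset: valid in both RFC 3339 and ISO 8601.
print(ts.isoformat())                # 2025-12-10T21:27:00+00:00

# Basic (compact) form: valid ISO 8601, but not valid RFC 3339.
print(ts.strftime("%Y%m%dT%H%M%S"))  # 20251210T212700
```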

#219 /
21:27


In the end, even my surprise at how well organized my hospital was came down to the fact that years of negative storytelling about the national public health service had taken hold of me too. Of course, I was lucky, and sometimes things do derail from the expected course. The fact remains that the success of a surgical operation, like that of a TV series, also rests on the patient's, and the viewer's, willingness to trust and to entrust themselves. That is why whoever undermines trust in science and in the possibility of a complex narration of the facts undermines the foundations of society.

Stefania Carini in Siamo spettatori anche quando siamo pazienti

#217 /
13:28
/ #italia

8 December 2025

Vercel paid out $750,000 in bug bounties for 15 WAF bypasses against React2Shell over the weekend.

#216 /
21:06
/ #security

7 December 2025

Shopify's Black Friday numbers:

This Black Friday Cyber Monday, the scale of global commerce surged. At peak, we processed 11TB of logs per minute.

Shopify’s edge (post-CDN) averaged 312 million requests per minute across BFCM, peaking at 489 million requests per minute.

At peak, our global Kubernetes fleet ran over 3.18 million CPU cores.

Powered largely by MySQL 8, our database fleet sustained 53.8 million queries per second and 4.28 billion row operations per second at peak 🌐

Kafka + Flink powered real-time experiences for merchants and buyers.

Flink processed over 150 MB per second and streaming analytics latency improved 103x since BFCM 2024, supercharged by our migration to Flink SQL.

Our CDN [Cloudflare] served 183 million requests per minute, with 97.8% from cache for fast responses. At peak, we ran 23.2 million async jobs per minute.

(Shopify Engineering)

→ Merchants’ sales globally were $14.6 billion, up 27% from last year

→ 81 million shoppers bought from Shopify-powered brands

→ 15,800+ entrepreneurs made their first sale

→ 136+ million packages tracked in the Shop App

→ 2.2 trillion edge requests

→ Processed and served 90 PB of data from our infrastructure

→ Handled 14.8 trillion database queries and 1.75 trillion database writes

(Tobi Lutke)

#215 /
10:56
/ #cloud