Matteo's Notes


#reti

In Why IP Address Certificates Are Dangerous and Usually Unnecessary, Andrew Ayer explains why certificates for IP addresses offer weak security. Because IP addresses churn rapidly in cloud environments, and because the rules for validating ownership of an address are quite lax, it is relatively easy for an attacker to hold a valid certificate for an IP address they are no longer authorized to represent.

The basic security property provided by a certificate is that the certificate authority has validated that the certificate subscriber (the person who applies for the certificate and knows its private key) is authorized to represent the domain name or IP address in the certificate. This ensures that the other end of a TLS connection is truly the domain or IP address that you want to connect to, not a MitM impostor.

But the validation is not done every time a TLS connection is established; rather, it was done at some point in the past. Thus, the certificate subscriber may no longer be authorized to represent the domain or IP address.

How old might the validation be? As of February 2026, certificate authorities are allowed to issue certificates that are valid for up to 398 days. So the validation may be 398 days old. But it gets worse. When issuing a certificate, CAs are allowed to rely on a validation that was done up to 398 days prior to issuance. So when you establish a TLS connection, you may be relying on a validation that was performed a whopping 796 days ago. You could be talking not to the current assignee of the domain or IP address, but to anyone who was assigned the domain or IP address at any point in the last 2+ years.

This problem clearly exists with domains too, but the domain namespace is vastly larger than the IPv4 address space, so in practice it barely registers there:

This is a problem with both domains and IP addresses, but it's way worse with IP addresses. While it's still very possible to register a domain that no one has ever registered before, you don't have this luxury with IPv4 addresses. There are no unassigned IPv4 addresses left; when you get an IPv4 address, it has already been assigned to someone else.

This vulnerability will shrink along with the maximum certificate lifetime (47 days, plus a 10-day validation reuse period, by 2029). In the meantime, you can query or monitor Certificate Transparency logs (e.g. crt.sh) to see which certificates have been issued for a given IP address or domain.
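crt.sh exposes its data as a JSON endpoint, where the `q` parameter accepts a domain or an IP address. A minimal sketch of a lookup; the helper names are my own illustration, not crt.sh's API:

```python
import json
import urllib.parse
import urllib.request

def crtsh_url(query: str) -> str:
    """Build the crt.sh JSON query URL for a domain or IP address."""
    return "https://crt.sh/?q=" + urllib.parse.quote(query) + "&output=json"

def issued_certificates(query: str) -> list:
    """Fetch every certificate crt.sh has logged for the given identifier."""
    with urllib.request.urlopen(crtsh_url(query)) as resp:
        return json.load(resp)
```

Each returned entry includes fields such as `issuer_name`, `not_before`, `not_after`, and `name_value`, so spotting certificates issued to a previous holder of an address is a matter of filtering on the validity dates.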

#355 /
22 febbraio 2026
/
10:22
/ #reti#security#cloud

TIL: at github-debug.com you can run a speed test against several GitHub domains to check their performance (GitHub still runs its own AS and its own datacenters, even as it migrates to Azure).

#334 /
6 febbraio 2026
/
14:44
/ #github#reti

Deutsche Telekom and its practices that degrade the quality of the Internet, as seen by an employee of video streaming companies:

Commenting from my alt to avoid doxxing myself. Have spent over a decade in various 'large' streaming video companies, the ones you absolutely know about today.

DTAG is bar none the worst ISP to work with. Everything they do is politics, they may decide to 'forget' to increase the bandwidth on a PNI until you take a meeting with german regulators. Almost every other ISP views PNI as the best way to uphold customer satisfaction without breaking the bank over a more expensive IX and will happily add ports when needed, DTAG on the other hand often requires concessions and selective agreements with a lot of strings attached.

I don't think Germans realize just how much DTAG is holding the experience back for end users (given it's partially state-owned)

#310 /
26 gennaio 2026
/
11:04
/ #tlc#reti

IP Addresses through 2025 is a sweeping analysis by Geoff Huston, guru of the Australian Internet, of the state of IP resource allocation, both IPv4 and IPv6.

#298 /
21 gennaio 2026
/
21:02
/ #ipv6#reti

ttl is an mtr with a few extra features, including ASN and DNS data enrichment:

Key features of ttl include:

• Multi-flow probing – Enumerates all load-balanced paths (no more seeing just one path through ECMP).

• Path MTU discovery – Pinpoints exactly where fragmentation kills your jumbo frames.

• NAT detection – Reveals when a middlebox is quietly rewriting your source ports.

• Route flap alerts – Catches BGP instability as it happens by detecting path changes in real time.

• PeeringDB integration – Identifies which Internet Exchange (IX) you're crossing in your route.

• MPLS label visibility – Exposes provider LSP paths by decoding MPLS labels from ICMP responses.

• Smart loss detection – Distinguishes real packet loss from routers simply rate-limiting ICMP replies.

• Modern TUI – Features live stats, jitter calculation, ASN/GeoIP enrichment, and a sleek terminal UI (no 1990s look-and-feel).

• Scriptable output – JSON and CSV export for automating analysis or proving that yes, the problem is their network.

(LinkedIn)

#294 /
17 gennaio 2026
/
20:38
/ #reti


It’s always TCP_NODELAY

There is a long-standing TCP problem that resurfaces periodically in online discussions (today's included): the way Nagle's algorithm and delayed ACKs interact, causing unnecessary extra latency. Specifically:

  • Nagle's algorithm delays the client's transmission of data as long as there is unacknowledged data in flight, the idea being to reduce TCP/IP header overhead. For example, typing a single character in a remote terminal produces 1 byte of payload, but the headers add up to dozens of bytes.
  • Delayed ACKs act on the other side of the connection, delaying the ACK itself when response data is expected shortly (e.g. within 200 ms), so it can piggyback on that response.

The result is this:

The interaction between these two features causes a problem: Nagle’s algorithm is blocking sending more data until an ACK is received, but delayed ack is delaying that ack until a response is ready.

Hence the "proposal" from an AWS engineer to disable Nagle essentially always, by setting the TCP_NODELAY option on sockets or at the operating-system level:

First, the uncontroversial take: if you’re building a latency-sensitive distributed system running on modern datacenter-class hardware, enable TCP_NODELAY (disable Nagle’s algorithm) without worries. You don’t need to feel bad. It’s not a sin. It’s OK. Just go ahead.

More controversially, I suspect that Nagle’s algorithm just isn’t needed on modern systems, given the traffic and application mix, and the capabilities of the hardware we have today. In other words, TCP_NODELAY should be the default.
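Enabling the option is a one-liner on most platforms. A minimal Python sketch:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm on it.
# With TCP_NODELAY set, small writes go out immediately instead of
# waiting for the ACK of previously sent, still-unacknowledged data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (getsockopt returns nonzero when set).
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
```

On Linux there is no global sysctl for this, so in practice it is set per socket by applications and servers; nginx, for example, exposes it as the `tcp_nodelay on;` directive.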

#257 /
23 dicembre 2025
/
14:12
/ #reti

Optimizations from another era in the Facebook app:

In 2012 we took this wild ride at mobile infra at Facebook when trying to reduce the several-seconds long load time for “Newsfeed”. A few people worked on different approaches. Something we quickly realized was that setting up a connection with TCP and TLS was incredibly slow on mobile networks at the time. The fix was to have just one, keep it alive and multiplex. Shaved a whole second off. But it was still slow. Several people were convinced that us sending JSON was the problem, so two different teams started to work on compact binary encoding. After a lot of experimentation what actually worked out best was to send JSON with ordered fields and a compile-time generated parser. Turns out both our iOS and Android app would do something silly like: 1) read all JSON data from server into a buffer, 2) decode that buffer with a generic JSON decoder into lists & dicts, 3) traverse those structures and build the final struct/class tree. Oh and another neat thing we eventually did—when the network connection needed to be setup—was to send an optimistic UDP packet to the server saying “get started fetching data for the following query”; once the real connection was established, TLS handshake completed and user session authenticated, the response was already ready to be sent back.

#198 /
3 dicembre 2025
/
23:39
/ #dev#reti

Google recently changed its peering policy for Google Cloud and YouTube:

  • No more new peering at public IXPs.
  • PNIs at a minimum of 100 Gbps, for at least 10 Gbps of peak traffic (settlement free).
  • For everything else, the advice is to rely on one of the "certified" ISPs on this list (paid IP transit, I assume), which have at least two interconnections in at least one Google metro area.

(video)

Private peering allows a network to connect directly with Google over a dedicated physical link known as a private network interconnect (PNI).

Google offers 100G and 400G private peering (PNI) at the facilities listed in our PeeringDB entry. Note that this type of direct peering occurs at common physical locations, and both Google and any peering network bear their own costs in reaching any such location.

Google no longer accepts new peering requests at internet exchanges (IXPs). However, Google maintains dedicated connectivity to the internet exchanges (IXPs) listed in our PeeringDB entry. We also maintain existing BGP sessions across internet exchanges where we are connected. For networks who do not meet our PNI requirements Google will serve those networks via indirect paths.

(peering.google.com)

#169 /
18 novembre 2025
/
22:42
/ #reti#google


In France, DNS providers must block domains on request even without a court ruling (sounds familiar...):

We sought legal advice, and unfortunately discovered that French law, specifically Article 6-I-7 of the Loi pour la Confiance dans l'Économie Numérique (LCEN), might actually require us to respond and apply blocking measures, at least for French users.

That said, this whole situation shows just how inadequate this regulation is. Such decisions should be made by a court — a private company shouldn’t have to decide what counts as “illegal” content under threat of legal action.

(Adguard)

#155 /
16 novembre 2025
/
10:46
/ #dns#reti#legal

Vodafone DE is also adopting the depeering strategy, following in Deutsche Telekom's footsteps:

By the end of 2025, Vodafone will have completely withdrawn from every public internet exchange in Germany, including DE-CIX Frankfurt, the largest internet exchange on the planet. Instead, all traffic will flow through a single company called Inter.link, which possibly will charge content providers based on how much data they send to Vodafone customers. It might be the telecom equivalent of a landlord announcing they're demolishing all the sidewalks in town and replacing them with a private toll road.

[...]

Think about that: you pay Vodafone for internet access. YouTube pays Inter.link for the privilege of serving you. Both ends pay, but the service you receive gets worse because the architecture degrades and bottlenecks concentrate through fewer connection points. Vodafone saves money on operational overhead while extracting new revenue from content providers. You, the customer, subsidize this twice and get a degraded product.

[...]

You'll have a two-tiered internet: fast lanes for services that pay, slow lanes for everything else. [...] When you pay Vodafone for internet service, you think you're buying neutral access to the global internet. You're not. You're buying access to Vodafone's network, and Vodafone controls how well that network connects to everything else.

From the excellent Coffee.link article, which lays out the context well, along with the Deutsche Telekom precedent and its notable effects on the quality of the Internet.

#134 /
8 novembre 2025
/
14:50
/ #reti

Number of users per IP address worldwide:

And the use of CGNAT, as detected by a Cloudflare algorithm:

(Cloudflare)

#122 /
30 ottobre 2025
/
11:52
/ #reti