In Chrome DevTools it will be possible to apply throttling to a single specific request. Useful!
oklch.fyi. A great explanation of the OKLCH color space, which has been on the rise in web development lately.
0github.com is interesting:
To try it, replace github.com with 0github.com in any GitHub pull request url. Under the hood, we clone the repo into a VM, spin up gpt-5-codex for every diff, and ask it to output a JSON data structure that we parse into a colored heatmap.
TIL Go toolchain builds are reproducible down to the byte:
They made it so every version of Go starting with 1.21 could be easily reproduced from its source code. Every time you compile a Go toolchain, it produces the exact same Zip archive, byte-for-byte, regardless of the current time, your operating system, your architecture, or other aspects of your environment (such as the directory from which you run the build).
At Netflix, we’d often find that backend services had slow memory leaks, which took a long time to discover and fix because instances rarely lived longer than 48 hours, due to autoscaling policies. If we had chosen to focus on memory leaks instead of autoscaling, Netflix would have been unable to scale to meet demand, and would’ve been a much smaller business.
Matthew Hawthorne, former Netflix engineer.
GitHub has a new head of product, Jared Palmer, formerly of Vercel, and he says GitHub will implement stacked PRs/stacked diffs (Graphite.dev style):
RE: Stacked Diffs on @GitHub
After discussion w @ttaylorr_b, we can implement stacked PRs/PR groups already (in fact we kind of do with Copilot) but restacking (automatically fanning out changes from the bottom of the stack upwards) would be wildly inefficient. To do it right, we need to migrate @GitHub to use git reftables instead of packed-refs so that multi-ref updates / restacking will be O(n) instead of ngmi.
This will take some time but has been greenlit.
Very interesting!
How to win at CORS. A good, comprehensive resource for understanding CORS (Cross-Origin Resource Sharing).
How TimescaleDB helped us scale analytics and reporting. A recent, very interesting article by Cloudflare about TimescaleDB, a PostgreSQL extension and a much simpler alternative to ClickHouse for handling analytics data once plain PostgreSQL can no longer keep up.
GitHub Pages, our static site hosting service, has always had a very simple architecture. From launch up until around the beginning of 2015, the entire service ran on a single pair of machines (in active/standby configuration) with all user data stored across 8 DRBD backed partitions. Every 30 minutes, a cron job would run generating an nginx map file mapping hostnames to on-disk paths.
There were a few problems with this approach: new Pages sites did not appear until the map was regenerated (potentially up to a 30-minute wait!); cold nginx restarts would take a long time while nginx loaded the map off disk; and our storage capacity was limited by the number of SSDs we could fit in a single machine.
Despite these problems, this simple architecture worked remarkably well for us — even as Pages grew to serve thousands of requests per second to over half a million sites.
Hailey Somerville sul blog GitHub.
Working with Safari is a pain:
Almost all of Chrome's initiatives to kill third-party cookies (Privacy Sandbox) are now definitively dead; only partitioned cookies and little else remain:
CHIPS and FedCM, which improve cookie privacy and security and streamline identity flows respectively, have seen broad adoption, including support from other browsers. We'll continue to support those APIs and evaluate opportunities for future enhancements. We'll also maintain Private State Tokens and explore additional approaches to help developers reduce fraud and abuse.
After evaluating ecosystem feedback about their expected value and in light of their low levels of adoption, we've decided to retire the following Privacy Sandbox technologies: Attribution Reporting API (Chrome and Android), IP Protection, On-Device Personalization, Private Aggregation (including Shared Storage), Protected Audience (Chrome and Android), Protected App Signals, Related Website Sets (including requestStorageAccessFor and Related Website Partition), SelectURL, SDK Runtime and Topics (Chrome and Android). We will follow Chrome and Android processes for phasing out these technologies and share updates on our developer site.
TIL in PostgreSQL, ALTER DEFAULT PRIVILEGES applies only to objects created by the role that created the default privileges. By default that's the currently connected role, but you can specify one:
ALTER DEFAULT PRIVILEGES
FOR ROLE prod_dmarcwise
IN SCHEMA public
GRANT SELECT ON TABLES TO prod_pgdump;
A few resources on modern techniques for CSRF protection:
Sec-Fetch-Site:
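The Sec-Fetch-Site header is set by the browser itself and cannot be forged by page scripts, which enables a very simple server-side CSRF check: reject state-changing cross-site requests. A framework-free sketch (the handler name and the fallback policy for old browsers that omit the header are my assumptions, not from the resource):

```javascript
// Sec-Fetch-Site values: "same-origin", "same-site", "cross-site",
// or "none" (direct navigation, e.g. typing the URL).
// Blocking cross-site non-GET requests stops classic CSRF.
function rejectCrossSiteWrites(req, res) {
  const site = req.headers["sec-fetch-site"];
  const isWrite = !["GET", "HEAD", "OPTIONS"].includes(req.method);

  // Note: very old browsers omit the header entirely; here we allow
  // those requests (an assumption) — a token-based fallback is the
  // stricter alternative.
  if (isWrite && site === "cross-site") {
    res.statusCode = 403;
    res.end("cross-site request rejected");
    return false; // request handled, stop processing
  }
  return true; // safe to continue
}
```

Plug it in at the top of any Node request handler; requests carrying `Sec-Fetch-Site: cross-site` with a write method get a 403 before any state changes.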
TIL Argon2 for passwords is not more secure than bcrypt, contrary to common belief:
[...] Me (@jmgosney) and @Sc00bzT were both on the experts panel for the Password Hashing Competition, and both of us will tell you not to use Argon2 for password hashing. It is weaker than bcrypt at runtimes < 1000 ms.
Yep, we basically completely failed. We set out to identify The One True PHF and instead we selected yet another KDF. We also placed way too much emphasis on "memory hardness" when we should have been emphasizing "cache hardness."
Bottom line, if you're already using argon2, you're totally fine. It's still a good PHF and much better than most everything out there. But if you aren't using argon2, bcrypt is a better choice.
Cache-Control for Civilians is a classic must-read (still relevant and still updated) for anyone who wants to learn, once and for all, the headers for managing the HTTP cache.
TIL JavaScript now has native groupBy() functions (Object.groupBy and Map.groupBy), so reduce() is no longer needed for grouping. A mini guide here.
Don't Sleep on AbortController. Introduzione a come annullare operazioni in JavaScript.
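The core idea can be sketched in a few lines: one AbortController, and any number of consumers listening on its signal. Here a hypothetical slow task is cancelled after 50 ms; fetch(), addEventListener() and streams all accept the same `{ signal }` (the task itself is my toy example, not from the article):

```javascript
// A cancellable long-running task: it resolves after 10 s unless the
// signal fires first, in which case it cleans up and rejects.
function slowTask(signal) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), 10_000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new DOMException("cancelled", "AbortError"));
    });
  });
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 50); // cancel after 50 ms

slowTask(controller.signal).catch((err) => {
  console.log(err.name); // "AbortError"
});
```

The same `controller.signal` can be passed to several operations at once, so a single `abort()` tears down a whole group of pending requests, timers, and listeners.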
Vite: The Documentary: a documentary about Vite featuring many key people from the JavaScript ecosystem. Interesting!
I came across this recent initiative, HTTP/1.1 must die, which argues that the risk of HTTP smuggling is too high and that we should therefore migrate to HTTP/2 for upstream connections in reverse proxies.
For context (from the paper):
HTTP/1.1 has a fatal, highly-exploitable flaw - the boundaries between individual HTTP requests are very weak. Requests are simply concatenated on the underlying TCP/TLS socket with no delimiters, and there are multiple ways to specify their length. This means attackers can create extreme ambiguity about where one request ends and the next request starts.
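The classic CL.TE desync makes this ambiguity concrete: the same byte stream carries both a Content-Length and a Transfer-Encoding header, so a front-end proxy and a back-end server can disagree on where the first request ends. A hypothetical payload (host and byte counts are purely illustrative, not from the paper):

```javascript
// An ambiguous HTTP/1.1 request: Content-Length says the body is
// 13 bytes, Transfer-Encoding says it ends at the 0-length chunk.
const raw =
  "POST / HTTP/1.1\r\n" +
  "Host: example.com\r\n" +
  "Content-Length: 13\r\n" +
  "Transfer-Encoding: chunked\r\n" +
  "\r\n" +
  "0\r\n" +
  "\r\n" +
  "SMUGGLED";

const headerEnd = raw.indexOf("\r\n\r\n") + 4;

// A proxy trusting Content-Length reads 13 body bytes and considers
// the socket clean:
const byContentLength = raw.slice(headerEnd, headerEnd + 13);

// A server trusting Transfer-Encoding stops at the terminating chunk,
// leaving "SMUGGLED" on the socket as the start of the *next* request:
const byChunked = raw.slice(headerEnd, raw.indexOf("0\r\n\r\n") + 5);
const leftover = raw.slice(headerEnd + byChunked.length);

console.log(leftover); // "SMUGGLED"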
HTTP/2 does not suffer from this problem:
HTTP/2 is not perfect - it's significantly more complex than HTTP/1, and can be painful to implement. However, upstream HTTP/2+ makes desync vulnerabilities vastly less likely. This is because HTTP/2 is a binary protocol, much like TCP and TLS, with zero ambiguity about the length of each message.
And the problem can occur even when the client uses HTTP/2, precisely because of reverse proxies:
Servers and CDNs often claim to support HTTP/2, but actually downgrade incoming HTTP/2 requests to HTTP/1.1 for transmission to the back-end system, thereby losing most of the security benefits.
How to fix it:
First, ensure your origin server supports HTTP/2. Most modern servers do, so this shouldn't be a problem.
Next, toggle upstream HTTP/2 on your proxies. I've confirmed this is possible on the following vendors: HAProxy, F5 Big-IP, Google Cloud, Imperva, Apache (experimental), and Cloudflare (but they use HTTP/1 internally).
Unfortunately, the following vendors have not yet added support for upstream HTTP/2: nginx, Akamai, CloudFront, Fastly.
GitHub is moving to Microsoft Azure:
Vladimir Fedorov, GitHub’s chief technology officer, made the Azure migration announcement internally earlier this week, noting that GitHub is currently struggling with data center capacity. GitHub is currently hosted on the company’s own hardware, centrally located in Virginia. “We are constrained on data server capacity with limited opportunities to bring more capacity online in the North Virginia region,” Fedorov writes in a note to GitHub employees, or GitHubbers as they’re known internally.
To ensure the move to Azure is completed within 12 months, GitHub’s leadership team is asking employees to delay new features in favor of the Azure migration. “We will be asking teams to delay feature work to focus on moving GitHub,” Fedorov says. [...]
GitHub is now aiming to move fully off its own data centers within two years. This gives GitHub 18 months to execute its migration, with a six-month buffer for any delays. Most of the work will be completed over the next 12 months, according to Fedorov.
Maybe this is finally the time they enable IPv6.