> Open source creates a useful urgency: when your code is public, you assume it will be examined closely, so you invest earlier and more aggressively in finding and fixing issues before attackers do.<p>This should be the mentality of every company doing open source. Great points made.
> <i>I want to be fair to Cal.com here, because I don’t think they’re acting in bad faith. I just think the security argument is a convenient frame for decisions that are actually about something else. […] Framing a business decision as a security imperative does a disservice to the open-source ecosystem that helped Cal.com get to where they are.</i><p>That sure sounds like bad faith to me.
> <i>Large parts of it are delivered straight into the user’s browser on every request: JavaScript, …</i><p>Ooh, now I want to try convincing people to return from JS-heavy single-page apps to multi-page apps using normal HTML forms and minimal JS only to enhance what already works without it—in the name of <i>security</i>.<p>(C’mon, let a bloke <i>dream</i>.)
There are a lot of things to hate in the Web3 world. The lack of back-button breakage, form-resubmission prompts, or redirect loops is a strange thing to dislike, though.
The web has grown so hostile lately that JavaScript is honestly not safe or useful anymore. The only things it's used for are serving ads, trackers, and paywalls. If I can't read a website with scripts disabled, it's not meant for me, and I'm just not reading it.
I concur that most web sites could use less JavaScript. And a lot of (but not all) cosmetic uses of JavaScript can be done in CSS.<p>Of course, for web apps (as distinct from web sites) most of what we do would be impossible without JavaScript. Infinite scrolling, maps (panning and zooming), on-entry field validation, asynchronous page updates, and WebSockets all require JavaScript.<p>Of course JavaScript is abused. But it's clearly safe and useful when used well.
This article raises a lot of good points that strengthen the argument against withholding models just because they're "too powerful". I remain disappointed to see AI corporations gloating about how powerful their private models are while refusing to provide them to anyone outside a special whitelist. That's more likely to give attackers a way in without any possibility of defense, not the other way around.
I think the "too powerful" line is a convenient half-truth that also helps with marketing and, more importantly, keeps the model from being distilled in the short term. They'll release it "to the masses" after KYC, or after they already have the next gen for "trusted partners".