I've seen this at so many startups (and worked to patch the gaps and put in best practices), including those backed by top-tier VCs. The problem is that it's rare for startups to have security-minded people.

It's usually designers, people who can raise money, and generalists who can stitch together APIs. It's not generally platform, DB, or security-minded people. The proliferation of things like Vercel and Supabase has exacerbated this.

So you get people deploying API keys client-side and DBs without RLS, or deploying service keys client-side when they should be anon. I mean really basic stuff.
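To make the anon-vs-service-key point concrete, here's a minimal sketch of the anti-pattern (a Supabase-style client in TypeScript; the env var names are illustrative, not from any particular codebase):

    import { createClient } from "@supabase/supabase-js";

    // WRONG: the service_role key bypasses RLS entirely. Shipping it in
    // client-side code hands every visitor full read/write on your DB.
    const leaked = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.SUPABASE_SERVICE_ROLE_KEY! // server-only secret, never in the browser
    );

    // RIGHT: the anon key is designed to be public, but it's only safe if
    // RLS is enabled with sane policies on every table it can reach.
    const safe = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
    );

The point isn't the library; it's that the two keys look interchangeable in code, and only one of them is subject to your row-level security policies.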
> So you get people deploying API keys client-side and DBs without RLS, or deploying service keys client-side when they should be anon. I mean really basic stuff.

Claude Code will do this, and actively encourage bypassing any verification before pushing to prod. I saw that firsthand with its attempted handling of a major CIAM provider, and then with Vercel and whatever OAuth provider in the ol' transitive breach.

That is common knowledge now, right? Or am I just smoking yellow tops?
Yep, this has been my experience over 15 years in startups as well. There are barely any punishments, so there is no incentive for startups to change how they operate.
You could even say they're paid more to "move fast and break things".
Same here. I've witnessed horrifying security bugs that were basically flagged as WONTFIX internally because they were too much work to fix, until they were exploited.
I used to work at a startup that handled medical records. A HIPAA breach would have wiped out the company through reputation damage alone, because our customers were also subject to HIPAA and couldn't possibly hire a startup with a track record of HIPAA breaches.

In my personal assessment, some individuals within leadership at this startup were highly risk-tolerant. I speculate that had those individuals been in leadership at other companies not subject to HIPAA, security practices would have been as lax and irresponsible as what's being described as the norm in this thread.

However, because of HIPAA, security practices at this company were fair-to-middling. There were certainly weak areas and mindless box-checking à la SOC 2, but it wasn't a complete shitshow. Those of us in the engineering department who cared were able to raise concerns without having them dismissed, and were generally allowed to do things the right way.

My takeaway: when there are actual severe penalties for privacy breaches, startups may not be so cavalier with your data.
More often than not, security-minded people are encouraged to focus on things that get the product to market faster instead.
In your opinion, is the lack of attention to security due to a bias for speed or a lack of expertise? For a startup or sole entrepreneur with very limited resources, what would be your advice?
IME it's always lack of experience, at least at the level being described here. It's the same kind of person adding CORS handling to a pure backend service for "security" reasons. They just don't know any better and don't have a good enough mental model of how it all fits together to be able to recognize when they need to research more. The insecure patterns being chosen instead usually aren't even easier or faster to implement.

I don't have any concrete recommendations other than that one really good senior+ engineer is more important than a legion of juniors early on. Basic security doesn't require an extra hire; it requires somebody experienced enough to build your product right.
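To make the CORS example concrete, here's a hypothetical Express service (my sketch, not from the article) showing why the middleware adds nothing on a browserless backend:

    import express from "express";
    import cors from "cors";

    const app = express();

    // CORS is enforced by browsers, not by your server. On a pure backend
    // service that no browser page ever calls, this middleware does nothing
    // for security: curl, scripts, and other servers ignore CORS entirely.
    app.use(cors()); // cargo-culted "security"; real authn/authz still missing

    app.get("/internal/report", (_req, res) => {
      // Anyone who can reach this port can call this; it needs real auth.
      res.json({ ok: true });
    });

    app.listen(8080);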
Yeah, in most cases these security vulnerabilities are regular bugs too.

I'll bet at some point someone contacted this company and said "hey, I'm being shown the wrong course" or "I can't access the material I just uploaded."

I've never seen anyone who got the basics right get compromised by some esoteric security issue. I'm sure it happens, and probably will happen more now that it can be automated, but it's usually a case of a system being left wide open.
Yeah, what was said below: lack of experience. A lot of people just don't know to ask about it or think through data flows. Running your codebase through an LLM, asking it to act as an L7 security auditor, take its time, think from first principles, and look for data leaks and potential security gaps in the code and architecture, is a good start. Also, don't ignore Supabase when it gives you suggestions on things to fix.

As a solo entrepreneur you really have to prioritize your time, but spending an extra day or two thinking through everything with an LLM like Gemini (Thinking or Pro), with an eye on security, before you start taking customer data is probably a really good use of your time, and you'll learn a thing or three. Just keep asking why, and think critically.
Off-topic, but I've become quite intrigued with AI pentesting, after being very unhappy with the various pentest firms we've used in the past, which rip us off or do very mediocre tests (yes, yes, the really good ones exist, but even then they're not going to match the speed at which we're Claude-coding now).

Tried a bunch of open-source pentesters, including Strix (though we never managed to get Strix to actually complete a run).
This project called Shannon was the only one we managed to get working reliably, and it definitely smoked the output of one of the $10K pentests we did (we had just discovered Shannon after we got the pentest firm's report, so it gave us a good baseline comparison). Caveat: this was white-box and our pentest firm did grey-box, but nevertheless I was still very unimpressed by what I got from the pentest firm. $50 vs. $10K is not even a comparison, lol, with far, far better results; it sent our CTO into near-heart-attack mode.

I think the days of pentesting firms are over, especially with Mythos/5.5-cyber-like capabilities coming into play. Very exciting times ahead!
"There was no meaningful organization scoping, no tenant isolation, and no permission check preventing a low-privilege user from accessing other organizations' records."<p>Let me guess though. They are SOC2 and ISO compliant right ?
Finally the AI security startup hustlers will keep the other tech startup hustlers in line. Maybe the era of devastating leaks and total disregard for user privacy will come to an end (doubtful).
Wait until we understand the depth of the current Mythos zero-day situation. We already have an overall idea of what's to come, but I don't think we can grasp the high-level implications this vast array of vulnerabilities has in store for us. I don't see or say this in a doomsday-ish way, nor as the world coming to an end. It will sting a bit, but overall it's way overdue and spells opportunity for all, imo.
[flagged]
Initial take: as vulnerability stories go, this is a pretty boring one; what they have here is a target that was secured largely by the fact that few people knew about it. The most work done in this blog post is establishing that a training platform deployed by DoD might be much more sensitive than the same kinds of applications which are ubiquitous throughout corporate America and which are generally boring targets.

The vulnerability itself appears to be something anyone with mitmproxy would have spotted within minutes of looking at the platform; apparently, rotating object IDs worked everywhere in the app, and there was no meaningful authz.

It's interesting if AI systems can "spot" these, in the sense of autonomously exercising the application and "understanding" obvious failed authz check patterns. But it's a "hm, ok, sure" kind of interesting.
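For anyone who hasn't seen the pattern before, the bug class is plain IDOR plus missing tenant scoping. A minimal before/after sketch (hypothetical Express handlers and data layer, not the actual app's code):

    // Assume: app = express(), db = your ORM, req.user set by auth middleware.

    // Vulnerable: any authenticated user can fetch any record by iterating IDs.
    app.get("/api/records/:id", async (req, res) => {
      res.json(await db.records.findById(req.params.id));
    });

    // Fixed: every lookup is scoped to the caller's organization, so an
    // enumerated ID outside your tenant just 404s.
    app.get("/api/records/:id", async (req, res) => {
      const record = await db.records.findOne({
        id: req.params.id,
        orgId: req.user.orgId, // the tenant check that was missing
      });
      if (!record) return res.status(404).end();
      res.json(record);
    });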
Tenant scoping is important. Just ask Microsoft: didn't they have one of these right at bing.com?
Oh, just every Bing user being vulnerable to having all their Microsoft data (O365 emails, for example) hacked. No biggie.

https://www.wiz.io/blog/azure-active-directory-bing-misconfiguration
Two questions prompted by this disclosure:

1. I didn't see mention of a bug bounty program granting limited authorization. How do independent researchers do this with legal safety, especially when the DoD is involved?

2. If a researcher discovered a vulnerability at a DoD contractor, and the contractor didn't seem to be resolving the problem, is there a DoD contact point that would be effective and safe for the researcher to report it to?
To answer the first question: a number of veteran independent researchers probably wouldn't have touched such a system. Plenty of companies will send their lawyers after you if you tell them that you've discovered a vulnerability and wish to responsibly disclose it. Even if you do things in good faith, the company has zero reason to assume the best of you and can hold a sword over your head by citing poorly written computer fraud and abuse laws that lean in their favor.

DoD does appear to offer a "Defense Industrial Base - Vulnerability Disclosure Program" for all public-facing DoD/DoW systems.[1] However, this might not include contractor-controlled assets or services. I cannot view the HackerOne page it redirects to for more details (login is required).

[1]: https://www.dc3.mil/Missions/Vulnerability-Disclosure/DIB-Vulnerability-Disclosure-Program/
> How do independent researchers do this with legal safety?

In my experience it's usually foreign nationals from third-world countries doing drive-by beg-bounty testing. Presumably they don't much consider legality.
> Their initial reply from the CEO: "I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?"

Well, that's pretty damning.
Should have been handled better, but some context is necessary:

If your name is associated with a startup in a visible leadership position, you will get mass-spammed by people claiming to have discovered critical vulnerabilities in your system. When you engage with them, the conversation turns into requests to hire them for their services.

So the CEO handled it poorly, but it's also not a great choice to withhold the details of the vulnerability in the initial contact. If the goal was to get something fixed, it should have been included in an easy-to-forward e-mail that could have been sent to someone who could act on it.

Anyone who works with security or bug bounties can tell you that the volume of bad reports was a problem before LLMs. Now that everyone thinks they're going to use LLMs to get gigs as pentesters, the volume of reports is completely out of control.
The number of spam "I found a vulnerability" emails you get as a SaaS operator is ridiculous. They never offer any proof of a vuln; they just want you to confirm you have a bug bounty program (in which case they'll start scanning afterwards), or they want you to pay ahead of time for the information, threatening to release it otherwise.

Their response isn't damning to me. It sounds like they just assumed this was one of these spammers.
I keep getting emails with content like: "I found a critical bypass vulnerability in your app. What is the appropriate channel to disclose it, and do you have a bounty program?"

I tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have a vulnerability, but you are totally vulnerable; just let us do a security audit for you."

I have a pre-written reply for these kinds of messages now.
I run the bug bounty for a fairly large OSS project, and the amount of shitty bad-actor spam, beg bounties, etc. that we get is huge.
Like 95% of the emails to security@ are straight garbage
Yeah, the signal-to-noise ratio on vulnerability reports is very low, especially when the initial report withholds any detail.

I get tons of these messages too, and the ones that do include details are the kind of junk you get from free "website vulnerability scanners": "missing headers" for things I didn't set on purpose, "information disclosure vulnerabilities" for things that are intentionally there, etc. You can put google.com into these things and get dozens of results.
From the looks of it, they actually asked for a way to report.
I have even more damning ones.

Sometimes the "good Samaritans" don't go to the vendor; they go to the client (i.e., they don't contact the DIB company, they contact the government agency).

I have seen government contractors get pilloried and lose their livelihood when this happened. And yes, there is always a "quick fix" offered by the "good Samaritan" to the vendor, and promised reassurance to the government agency, if only this misguided vendor would go with their solution.

It is also not unusual to find out later that the identification, or even the resource reported on, was wrong, but by that time the government agency has already punished the contractor, and the reporting "good Samaritan" is laughing (sometimes all the way to the bank).

They can get away with *unethical* vulnerability disclosure because: think of the children, the threat to the nation, grandma off the cliff, and <insert your favorite cliché justification of malfeasance>.

Yes, sore subject.
They could sell the next one to an adversary for a lot more money if they're going to act like that.
Yes, there are also many other lucrative illegal activities.
How is it illegal? It’s information available to the public.
If you sell something to someone and they commit computer crimes with it, you're going to have to prove that you couldn't have known they were a computer crimer.

It's the same thing with selling general offensive security tools. You have to proactively make it clear that they're for testing, not criminal use. Otherwise, cops are going to assume you're complicit and make things shitty.
Isn't it also illegal to withhold knowledge of a vulnerability for payment? It sounds like it should fall under some variety of blackmail.
That would be even worse than our already bad system.

The system is already pretty bad because vendors underinvest in security, and then, to fix it, researchers have to volunteer their time to investigate with no guarantee of payment. If the vendor could force researchers to hand over findings for free, nobody would want to do security research except hobbyists having fun. They'd basically be signing up for hours of tedious forced labor to explain vulnerabilities to the vendor.

I wish there were legislation that allowed the government to fine vendors for security vulnerabilities like this, with the amount scaling based on how much user data they leaked. It could function like other whistleblower systems, where a researcher who spots a leak can report it to the government and collect 50%. That way, if the vendor says, "We're not paying you," the researcher can turn around and collect the money from the fines.
Legality aside, there is no real market for this.
I wonder if this is how the Handala group recently stole the list of service members.

How do people find these vulnerabilities within the immense scope of the whole internet? Are they going around with some kind of generic API scanner that discovers APIs?
Probably based on insider info to some degree; if you already do any sort of work for the DoD, then that tends to help narrow the scope of the search for vulnerable things to exploit.
Yes. http://shodan.io
Feels like they were too nice. After 90 days of no response, why not just go full disclosure on them?

The CEO seems more interested in insulting people than securing his company's product.
Yikes, Schemata and that delinquent CEO should be held accountable.
https://x.com/strix_ai/status/2051361018450948511
Was the app vibe-coded?
Would be fascinated to know if this went through competitive procurement or if it was one of those Hegseth “let’s be lethal and ship broken shit to the warfighter” procurements.
a16z = "Andreessen Horowitz", for those not in the know. (The abbreviation is not expanded in the article. EDIT: OP has fixed the article.)
Would it be possible to stop using aXXb nomenclature within the titles? Some of us aren't hip enough to know what all of them mean.
Andreessen-Horowitz, whom most people (and they themselves) refer to as a16z, and who have the eponymous domain name (a16z.com). They're one of the top VC firms on the planet -- exceedingly relevant to HN audiences and commonly discussed here.
> you'd rather say Andreessen-Horowitz, which is just as arbitrary as a16z

Yes. I know Andreessen-Horowitz and I don't know a16z. Reading the title, I thought it would be about the cryptography serialisation specification. Turns out I was mixing it up with ASN.1.

> Their website is literally a16z.com

I hear that now. Before this, if pressed, I would have guessed that they probably have a website, indeed. If you had twisted my arm, my guess would have been andersenhorovitz.com (yup, with the typos; I learned the correct spelling today from your comment).

> exceedingly relevant to HN audiences

We contain multitudes.
> Yes. I know Andreessen-Horowitz and I don't know a16z.

So the world needs to adapt to your knowledge instead of you learning to adapt to an often-used, well-known moniker?
They just want to sound technical.
I'll be honest: I was thinking authorization (a11n?), so I didn't read it closely enough. But despite that, and having been on HN almost from the beginning (with a different account I lost the password to), I still didn't know what a16z was, though I do recognize Andreessen-Horowitz.
Opposite for me: I've seen a16z tons of times on HN, and also the domain here sometimes, but the full name would have meant nothing to me.
I didn't either. This is an ancient debate that can never be resolved completely, though — because the articles that HN submissions point to don't follow a style guide and there are always assumptions about audience priors. Best to just resolve it and move on.
Sorry, I come here for hacker content.
apologies, just a vc firm
[flagged]
[flagged]
[dead]