I really don't understand how this is legal. I guess Facebook maybe doesn't actually have any compliance requirements in the USA, but time-series screenshots of any SRE's screen are going to contain data that should not be stored by some data vacuum. I know Meta has a reputation for shitty data handling practices and US regulations are light compared to Europe, but how are they planning on securing passwords, encryption keys, PII, etc.? Can employees turn this off at their discretion? What happens if someone forgets to turn it off before they cat the companywide ssh root private key? Even setting aside legality, someone with access to this training data would have what sounds like an unacceptably broad level of access to company systems unless Facebook wants to get hacked.
This is legal for most businesses under US law, especially on company devices. And unfortunately not unheard of. Compliance with this data is typically handled in the same way you'd handle any data access situation -- by restricting access to the screencaps to a specific group of people.<p>Not that I support it -- but typically companies don't do this in spite of security concerns, they do it to <i>address</i> security concerns. But of course, what Meta is doing sounds like a different situation.
This data is going to get leaked in a breach. It will be used against you in a court of law. It will be used for training and (regardless of what anyone says) will be used to fire you once the AI can do your job.<p>And when all of the above happens Meta will be absolved of any responsibility.<p>I don't understand how it's legal either. I guess we need laws against it yesterday.
It doesn't have to get leaked. They can sell it and use it as another means to identify Internet users. Meta is pretty infamous for identifying, tracking, and understanding user behavior. We are kind of past the point where these companies care at all. If you think the push to add age verification to operating systems is an unrelated giggle I envy you. Something something Cambridge analytica.
Yeah, this is crazy, remember when engineers were actually engineers and that meant something? Imagine asking to install spyware on your lawyers' firms' company laptops because you didn't trust them not to make some deal with the judge. Or demanding 24 hour monitoring on everything a doctor does because you need to review the footage at any time.<p>EDIT: While we are here, let's do this for politicians as well :), publicly available, auditable 24-hour surveillance.
> let's do this for politicians as well :), publicly available, auditable 24-hour surveillance<p>Politicians will be the first to carve out exceptions for themselves for reasons of "security" while everyone else is surveilled.<p>Yes, it should literally be the opposite -- with power should come accountability. But that's not how these things work in practice.
Of the examples you listed, politicians are the only ones you directly fund and supposedly work for you. Your lawyers and doctors aren’t your employees, and they also don’t work on your property (though lawyers might handle your documents). The biggest thing this points to is that the mask is almost entirely off between employee-employer relationships in the US, and it looks like by ensuring everyone depended on employment for insurance before turning this corner, there’s not much resistance left.
This is why a worker's rights movement is important. You shouldn't have to rely on your employer's goodwill. Reasonable privacy rights on work equipment should be guaranteed by law, and any large company should have a Euro-style worker's council.<p>The legal environment is the only way to baseline behavior. In countries with strong worker's rights, you generally don't have to fight much to make use of them; it's the norm for management, too. Likewise, the US-style norm of having no expectations of your employer and the "stay in your lane"-type takes rampant in this thread are also symptoms of the environment and its norms.
Notably, I’ve had several lawyers sell me out. It’s not the emails, but the phone calls you need to worry about.
This is going to be a huge chilling factor for employees. You’d no longer be able to dissent, or discuss anything non-work related, with even the slightest expectation of privacy.<p>Yes, they could have accessed logs before, but there’s a difference between directed checking after incidents and active surveillance at scale.
Couldn't have happened to a more deserving group of people. My irony detector is sparking so badly I think it's about to blow.
As much as it's funny to dunk on Meta, this type of surveillance is becoming the norm. Failed startups are selling all their emails, chats, commits, etc. for companies to train on. Most job offers now come with statements about how you don't have a right to your likeness or your personal network. I think most people assume that's for photo ops, but... yeah. I expect more and more of this: products and product features rolling out with this as a focus.<p>Companies have shown us that IP going to AI providers is acceptable. Once you cross that line, your thought workers are assets, not people.
This is a naive take. Do you think it stops with just metamates (lmao, that’s what they call themselves) being surveilled? Nope. This is the exact type of thing that software ICs should reject in solidarity. Being happy with BadCompanyX trampling employee expectations directly allows GoodCompanyY to enact the same policies.
I'm happy to see the metamates (lol) receiving the same pain they inflict on others. Maybe it will teach <i>them</i> a lesson in solidarity.<p>You can't have solidarity about a bad thing with the people who are doing the bad thing! They have to stop doing the bad thing first! That's how solidarity works!
> This is the exact type of thing that software IC’s should reject in solidarity.<p>Yes. Which includes quitting, en masse, from any company that does this.<p>Meta <i>ought to</i> find it impossible to employ anyone with a policy like this.
Maybe in 2010 or 2015, but in 2026? Nobody is quitting their high paying job when the job market is <i>this rough</i>. A bubble has burst and there just are not the tech jobs out there that there used to be.<p>And employers know this, so they are enacting all kinds of draconian policies because they know employees know that they can't just leave the job and also keep their families fed.
If only there was some way where workers in this profession could form some type of JOIN(but like a vertical version?) between different sets of workers, even crossing company boundaries, so that workers could coordinate to ensure that everyone would be quitting at once, and therefore have any power at all to block anti-worker edicts.
The job market is at 2019 levels, so this rhetoric is nice but doesn't stack up. Yes, it's not at 2021 levels, which is when they overhired and took on a bunch of people they would not have hired before then.
This really depends on where you are. In the Bay Area it may be 2019 levels, in other parts of the country it is way worse than 2019.
The tech job market was about 2019 levels a year ago. It's materially worse now.
There are large organizations at Meta focused on basic research & design (FAIR, Open Compute, PyTorch, etc) and giving back to the community. Not everyone is maximizing revenue.
I guess Palantir is cool as long as they keep the queer interest group going
There are also large organizations at Meta focused on the optimal distribution of scam ads to the elderly.<p><a href="https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/" rel="nofollow">https://www.reuters.com/investigations/meta-is-earning-fortu...</a>
Like all of us these people make a cost-benefit analysis when it comes to their choice of employer and how much it suits their purposes and personal priorities like giving back to the community.<p>This is just another factor they’ll have to grapple with in their analysis.<p>I’m sure some of them will find it a bridge too far but not enough to really matter. The work will continue as will the expansion of Meta and the negative externalities that it produces.
I already assume that on a work computer everything I'm doing could be monitored by work IT. At every job I've had, I've made a point of not using work hardware for anything I even remotely thought someone at the job might object to. Instead I use my own hardware for that kind of thing - I own a smartphone, I own multiple computers, this is not hard to do.<p>When I worked at a startup that had some internal conflict between the software engineers and management, someone made a Signal group to chat about the issues among the software engineers privately and everyone joined that group with their own Signal accounts, without any kind of issue.
Yes, but I cannot imagine Meta cares about chilling their employees. They're deep into the "extract more value" phase and are no longer bringing in the cutting edge talent.
Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.<p>I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.<p>I wonder if this is where they are going.
> A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.<p>Feel like I'm reading a Gibson novel here.
There shouldn't be any expectation of privacy? There absolutely should!
Whether they should or shouldn't, you have to <i>expect</i> that your company has root on your work device or at least some sort of corporate admin profile that gives them access to everything on the device and all attached peripherals. This has been pretty standard at IT / tech companies for as long as I've been in the workforce. I personally wouldn't do anything personal on a work computer, from sending personal E-mails all the way up to storing nudes on it. Why do that when a separate personal computer is cheap and solves the problem entirely?
On a work computer? No there shouldn't and isn't.
This is Stockholm syndrome. Sure, you can enforce zero privacy on work computers, it will just lead to shitty work culture and lowered productivity.
> employee communications are already monitored everywhere<p>proof?<p>> Turns out people actually don't really care about privacy at work<p>lol, won't ask for proof, because it's trivially falsifiable
Ask your IT department what they're tracking and they'll tell you. And yet I assume you still continue to go to work and do not actively seek out non-surveilling companies. By "everybody," maybe I should clarify that I mean "the majority" instead.
As an old hand that's managed many people, I can tell you this is true.
Why not? How about a company-owned toilet? It's their property as well.
You're right, maybe they should put cameras in there too. But there's a reason we don't. Yet every worker still explicitly or implicitly knows not to use their work computer for personal tasks, as people can and do get fired for doing so.
This is a ridiculous statement. Everyone I know at my company uses work laptops for personal stuff. It's not in the land of freedom though, so great leaders like yourself can't fire people at will.<p>TBH at this point I don't believe you are a real person.
I stopped doing any personal stuff on a work laptop a long time ago, like 10+ years ago. There is absolutely nothing on my work laptop that is not work related. Working from home helps, though; I always have my laptop next to me. Same with the phone: under no circumstances will I do anything work related on my personal phone (and yes, I do have a company-provided phone with MDM etc.).
Consider: do they ever go on explicit websites on that computer? No? Because they know that's surveilled, while a personal computer used for the same purpose is not. As I said, people do know the difference and might do light personal things like googling something unrelated to work, but wouldn't do e.g. banking on a work computer. If they do, well, it'll be their fault if they ever get fired for it.<p>The fact that you think people who don't share your opinion on mixing work and personal stuff are somehow not "real" is part of the problem.
Most companies just don't have a reason to look through the computer they're letting you use to do your job. Don't give them a reason.<p>Maximizing shareholder value by observing you doing your job in the pursuit of replacing you with a very small shell script is a great reason that they've just discovered.<p>Get your own laptop, pay for your own cellphone, use your own internet service, etc. If you create anything of value on their property, or with their property, or during times they're paying you in any capacity, expect them to use it for profit.
Exactly, no one is stopping one from using their personal devices for any personal purpose, and the fact that somehow people are defending wanting to do personal things on a work laptop is utterly baffling to me. Like another commenter said, I always grew up with the notion, legal and social, that a company laptop is absolutely not your property and companies can and will look through it. Use your own devices for your own tasks.
People get fired for banking on a work computer? Whaaat, no way
I'm not American or in America, but I wouldn't use a work laptop for anything personal.<p>I mean I have my own laptop and phone, why would I use a work device for that stuff?
> I mean I have my own laptop and phone, why would I use a work device for that stuff?<p>Because you're traveling for work, and carrying two separate laptops eats into your limited baggage size/weight. Things are marginally better now that everything uses the same standard charger, but not much.
Maybe we should also call it labor camp.
That sounds like a truly dystopian take to me, but suppose you're right and nobody should ever use their work computer for anything personal.<p>Per TFA, this thing is literally taking screenshots of what is on the employee's screen. At work my screen sometimes had things such as: performance data on other employees, my own PII from HR systems, PII from customers, password managers, etc. It's also logging keystrokes. How many times do you type passwords a day?<p>Collecting that kind of information on purpose is truly wild. Imagine the security safeguards you would need just to prevent it from leaking. Wait, what, they're explicitly collecting it to train LLMs with it? God help us all.
Your screenshots go to your managers, not just anyone in the company. At Meta there are very strict safeguards preventing employees from, e.g., stalking their exes, so I'd assume the same security is used even for PII-filled images.
In most civilized countries you absolutely do have significant rights to privacy on a work computer.
I spend the majority of my adult life working, and you're telling me I should spend it surveilled?
You already do and your consent is part of your employment. Check your employee handbook, search for things like "data privacy" and understand how <a href="https://www.copyright.gov/circs/circ30.pdf" rel="nofollow">https://www.copyright.gov/circs/circ30.pdf</a> applies in the modern world, especially around AI. TL;DR companies can do whatever they want with your work / observe you and you have no real meaningful recourse.
I'm pretty surprised you're getting so much flak for this. This is the least controversial opinion I've seen on HN. I've been working for ~30 years, and at every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work and personal devices were for personal stuff. It wouldn't even occur to me to cross the streams. Carrying a second phone for personal stuff is a trivial burden.
> every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work, personal devices were for personal stuff<p>There's quite a difference between that and zero privacy, and there's also quite a difference between "IT policy says" or "the law permits" and "this is how things ought to be".<p>That said, between necessary endpoint security and the potential to get caught up in corporate legal disputes I feel like maintaining a strict separation is advisable. But that doesn't mean I support unnecessarily invasive surveillance or think it's a good thing.
I'm also very surprised, so much so that one of my comments got flagged for it. Seems like it's a few dissenters while others have mentioned concurring with this fact as I also have always been under the impression that work hardware is for work only. And then some people are talking about how it's authoritarian or anti human, like, it's not that deep.
/facepalm If we're going to debate norms and ethics, sending one liners into cyberspace won't get far. There are better ways. Invest in your conversational skills and listening skills, please. Otherwise you are a moth and HN is a streetlamp.
> the work machine is the company's property and there shouldn't be any expectation of privacy.<p>A bogus argument, methinks. Consider that the company also owns the phones, but can or do they listen to every phone call ?
Wait, so the engineers doing novel work are ousted? You fire the engineer that had the skill set to produce the work in the first place? Surely this creates a Stasi-like, neighbor-snitching environment with a chilling effect, where the better you do, the faster you become a target for replacement by engineers incentivized to win points by replacing you. Even being very charitable, in the scenario where the code is so entrenched in domain knowledge that the employee has become a huge bus factor, an LLM is going to make that kind of code worse. I'm struggling to imagine the subset of people this replaces that is not a long-term detriment to everyone working there. Those people became "key personnel" for a reason, no?
> A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.<p>I know you’re in India, but in the US, could this not be considered intellectual property theft or a violation of “right of publicity”? Your persona and working style is one of the core values you bring to market; building a simulacrum of that is not something I expect to be part of the “your output is the company’s IP” clause in an existing contract.<p>I will give a company the right to try to reproduce my output. But my very likeness and modus operandi? No.
For what it’s worth I heard from a manager in Meta that they are doing this too.
>I will give a company the right to try to reproduce my output. But my very likeness and modus operandi? No.<p>You don't need to "give" them anything -- they already have everything they need, because basically anything you do, especially at work and especially while using company equipment, is legally considered a "work made for hire" <a href="https://www.copyright.gov/title17/92chap1.html" rel="nofollow">https://www.copyright.gov/title17/92chap1.html</a> + <a href="https://www.copyright.gov/circs/circ30.pdf" rel="nofollow">https://www.copyright.gov/circs/circ30.pdf</a><p>Here's how refusing to let them do whatever they think would maximize shareholder value with your output, or with data they collect from your company computer, would actually go down: the company would do something you didn't like, you'd try to complain about it, HR would listen and document everything. In the best-possible case, they'd let you personally opt out. More likely, since you're likely very easy to replace in their minds, they'd refer you to the data privacy clauses in the acceptable usage policy section of the employee handbook, maybe reference the notice sent out to everyone about how they're doing this, then fire you for performance reasons a few months later. You'd be given an NDA and a very average severance, then you could choose to try to hire a lawyer (who would take at least a third of any pre-tax settlement amount) and fight them, in which case they'd settle for more or less the same as the severance package (and keep in mind that both it and any court settlement are taxable income, so you're not getting a windfall in any case), or you'd just sign the NDA and take the severance with no admission of wrongdoing on their part and no legal recourse.<p>Large companies employ entire orgs of lawyers who specialize in these matters, and it is literally their job to protect the company, not the employees, from lawsuits like this. Is it fully legal and in the clear? Probably not.
Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course. Welcome to America, land of the free for corporations which are legally people, just ones with infinite lives who cannot be arrested / imprisoned but can make legal decisions but cannot be subpoenaed. See eg <a href="https://www.theverge.com/policy/886348/meta-glasses-ice-doxxing-privacy" rel="nofollow">https://www.theverge.com/policy/886348/meta-glasses-ice-doxx...</a> for how the C-suite thinks about this type of thing.<p>Follow eg <a href="https://www.aclu.org/press-releases/aclu-and-75-organizations-sound-alarm-on-metas-plans-to-add-facial-recognition-technology-to-ray-ban-and-oakley-eyeglasses" rel="nofollow">https://www.aclu.org/press-releases/aclu-and-75-organization...</a> to see what actually happens.<p>More on how "work for hire" applies in a legal sense:<p><a href="https://www.brookskushman.com/insights/innovations-at-work-who-really-owns-employee-created-inventions/" rel="nofollow">https://www.brookskushman.com/insights/innovations-at-work-w...</a><p><a href="https://outsidegc.com/blog/common-misconceptions-about-the-work-for-hire-doctrine/" rel="nofollow">https://outsidegc.com/blog/common-misconceptions-about-the-w...</a><p><a href="https://www.law.cornell.edu/wex/work_made_for_hire" rel="nofollow">https://www.law.cornell.edu/wex/work_made_for_hire</a><p><a href="https://crownllp.com/blog/what-is-a-work-for-hire/" rel="nofollow">https://crownllp.com/blog/what-is-a-work-for-hire/</a>
> Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course.<p>I am aware of "how the C-Suite thinks about this type of thing", but this is also a good example to surface here of what to redline in future employment contracts. Yes, that will likely shut you out of a lot of places, but the opposite is beyond learned helplessness: it is capitulation to a future that will not end well for the tech worker.
skills.md heh they serialized you into a config file and used it to boot your replacement. could've at least picked a better extension.
Just speculating, but the intention wasn't reducing key personnel risk. It was so that your employer could fire them and replace them with an agent running off of their associated skills.md.
<i>Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.</i><p>There remains a thing called human dignity.<p>If a company can't trust the people it hires, that's a fault in the hiring process, not the employees.
Not to disagree with you here, because I wholly support this position, but I can see the problem from both angles. The problem, it seems to me (and I'm not sure which came first), is that employees started being reckless at work, probably because employers stopped caring about the treatment of their workers, which ramped up the vicious cycle to where we are now.<p>I can see an argument for companies not trusting their employees, because most employees harbor borderline corrupt thinking in their workplace and have terrible work ethics. Of course, all of this is brought on by corporate culture, so it's their fault in the first place, but I'm not exactly sure what started where.
><i>we are encouraged to create skills.md based on the traits of our colleagues</i><p>Like that "Scott is an asswipe who never agrees to any idea that isn't his" or what?
>A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.<p>This is exactly what they're doing, and they aren't the only ones.
Question: I have heard that at some tech companies that use internal chat software, the general practice is for IT to set it so that the messages are automatically deleted at the end of the day. In Google Chat this is a feature called "turn off history", and the idea behind it is that it can reduce a paper trail when there are investigations into the company doing something that's potentially monopolistic or otherwise shady.<p>If keystrokes are captured, isn't this a double-edged sword where maybe the company might be inadvertently collecting evidence against itself if there's an investigation and the investigators want to collect keystrokes?
Yeah, if at any time Mark can ask Meta AI ‘which of my employees insulted me today’ for example, that’s wild
All enterprise messaging apps support exporting your DMs today, for legal compliance.
He's already got the willing-intern-finder.md skill locked and loaded
I insulted him in my mandatory Exit Interview form from HR when I resigned.<p>It has had no impact on recruiters trying to win me back since then.
Until the day Zuckerberg meets you, and his Ray-Ban glasses profile your face and pull up that comment from your exit interview as pertinent information.<p>His eyes glaze over and he just reads that in his corner vision instead of listening to you, and you get snubbed forevermore.
> I insulted him in my mandatory Exit Interview form from HR when I resigned.<p>How can they legally mandate an exit interview when <i>you</i> resigned? Is it part of the employment contract? What would have happened if you showed them the finger and not participated?
Nothing happens, it’s optional. However if you want to be able to be rehired it doesn’t hurt to do it. It doesn’t take long and you don’t really have to say anything.
They can't legally mandate an exit interview, but they sure can pay you for one.
Possibly nothing, possibly you'd get blacklisted and they'd share that with other companies in ways in which you'd never know or have any recourse <a href="https://fortune.com/2025/03/27/meta-block-list-hiring-employees/" rel="nofollow">https://fortune.com/2025/03/27/meta-block-list-hiring-employ...</a><p><a href="https://www.businessinsider.com/how-block-lists-affect-your-job-search-and-career-advancement-2025-3" rel="nofollow">https://www.businessinsider.com/how-block-lists-affect-your-...</a><p><a href="https://medium.com/@ossiana.tepfenhart/the-no-hire-list-is-real-and-this-might-be-why-youre-not-hired-ca5113218f05" rel="nofollow">https://medium.com/@ossiana.tepfenhart/the-no-hire-list-is-r...</a><p><a href="https://www.theguardian.com/technology/2018/mar/16/silicon-valley-internal-work-spying-surveillance-leakers" rel="nofollow">https://www.theguardian.com/technology/2018/mar/16/silicon-v...</a>
In my experience at other companies, recruiters and pretty much no one else has any idea that someone has been blacklisted, until you do all of your interviews and tell HR to hire that person, and that's when they tell you the person is on some kind of shit list and can't be hired. That was an awkward conversation with someone who had basically been told we'd be making an offer soon.
Narcissists often want to get the ones that ran away back to properly destroy them.
Should have framed it. Good job.
> You’d no longer be able to disent, or discuss anything non-work related with even the slightest expectation of privacy.<p>When I joined the workforce a long time ago, I went in with the mindset that: Their property, their equipment, their right to monitor (or even keylog).<p>I was pleasantly surprised to find that not to be the case, but I've always believed in their right to do so.<p>Why do people expect to have a right to do non-work related stuff on the job? Every company I've worked for states in the employment contract/policies what you can and cannot do on the job. They never enforce it to the extent that they outline in the policies, but it's usually clear cut.<p>If you want to rant about the company, do it outside the company! Or at a physical water cooler. When coworkers want to rant to me about the company, they don't use Slack/Teams. They message my personal, non-work number.
It's absolutely their right, but it's a dramatic cultural departure from the history of the company.<p>In the late 2010s/pre-covid it was very common for employees to port their personal cell phone number to their work phone and just <i>not have a personal cell phone</i>. The internal culture at the company was remarkably open for their size.<p>That all went away by the time I left in 2022, and from what I've heard it has only accelerated into an employee-hostile environment. I'm not shocked at this move.
While you have the right practical approach, I do believe companies should face harsh regulations preventing this kind of monitoring. It has almost universally negative effects, from enabling union-busting to exploitation to all kinds of discrimination and favoritism.
Engineers build tools for other people. The profession exists in support of human life. We make the substrate that civilization runs on.<p>If humans are the point, this also goes for keeping work environments humane.
1. But they are not paying for the training you bring to the company.
2. About ranting about the company: it is difficult to organize. That's why unions existed, and why unions were allowed to meet during work hours.
I cannot understand how anyone can hold such outrageously antihuman beliefs.<p>Governments, corporations and any other organizations should all exist FOR the people, not the other way around.<p>American-style capitalism truly is a disease.
There is no clean separation between personal and work. It is also more efficient to blend them (if I expect a baseline level of non-snoopiness on my work computer, I will text my boyfriend from my work laptop... obviously beneficial for the firm).<p>Either way when it comes to ranting about the company: many workplaces don't have a watercooler where all your team mates congregate (e.g. remote/different offices). Also what, you'll rant about confidential work projects over non-work texts?
This comment pairs really well with the song <i>Sixteen Tons</i> - I cued the song[1] and re-read your comment.<p>More substantively: I would like the employer/employee transaction to be one of buying/selling labor. To me, training AI on keystrokes nudges the deal towards selling one's "soul", next to other dystopian tropes like brain implants and work toilets that analyze excretions.<p>You are correct that employers own the laptops and can install anything they want, which is why I never do anything other than work there - the farthest I will go is participating in employer-hosted shitpost groups/channels, which are <i>not</i> anonymous, and they are free to train their models on that.<p>1. <a href="https://www.youtube.com/watch?v=S1980WfKC0o" rel="nofollow">https://www.youtube.com/watch?v=S1980WfKC0o</a>
>Why do people expect to have a right to do non-work related stuff on the job?<p>Like use the restroom? Personally, I'm not a slave. I am getting more and more used to the idea of having to push back on those who do exhibit such a mentality. Y'all are beginning to become a threat to the rest of us.
It's kind of funny to see how people here are reacting to the world they built when it finally comes to them
Meta: look, you don't <i>have</i> to wear a diaper while you work, but those that do are 87% more likely to get promoted! The choice is yours!
Companies pay their employees to build things. They do not pay their employees for their likeness or the inner workings of their brains. Meta is trying to get the latter by keystroke tracking. It is an overreach in that context.<p>If they just want to monitor your computer for the purposes of productivity tracking, that is within their rights, imo - just a shitty thing to do.
You would love the world of Severance! Drop your humanity and individuality at the door. Become a mindless drone
I don’t care if a company monitors which websites I go to on a work computer, what applications I run or what I say on Slack.<p>On the other hand I would be looking for another job if they had keyloggers or were taking screenshots, even if they never said anything about me shopping on Amazon or randomly browsing Hacker News or any website that wasn’t gaming or Netflix during work hours.<p>Heck, I used to travel a lot more for business and I used my work laptop for Netflix and other streaming services in the hotel.<p>As long as I’m meeting performance standards it shouldn’t matter.
What a pathetic quisling attitude to life.
There was a lot of open dissent in the workplace from what I recall.
Meta employees are not typically known for their deep concerns about privacy.
Highly ironic that people who spend their lives building things that invade everyone else's privacy might now whinge about privacy themselves.
That's not a bug, that's a feature
if you use your work machine at Facebook for dissent, you don't deserve a tech-adjacent job.
I don't know about you, but corporate has a message on my screen before I log in:<p>"this computer is property of WORK CORP, you have no expectation of privacy on this computer"<p>If you want privacy use a personal device....
It's absolutely wild to me that anyone has ever operated under any other assumption. If you want to complain about your boss do it at happy hour.
>data collected would not be used for performance assessments or any other purpose besides model training<p>And you expect <i>Meta employees</i>, of all people, to believe this?
These are the same employees that willfully code the largest spy network on the planet, so it seems like they are willing to believe a lot
In the midst of their 4th straight year of layoffs, with another looming 20% cut coming, I'm guessing Meta employees are a tiny bit suspicious.
Does it matter? I think the high compensation is what will drive compliance.
As true as "It's free and always will be".
I like to imagine they’ll mostly capture meta employees using AIs to do work.<p>Then they’ll deploy models trained on this, and begin capturing employees using AIs that are good at using AIs to do work.<p>Repeat a few times and they’ll start capturing the keystrokes from people mashing their heads into keyboards with despair and exclaiming, “Why can’t these models do anything anymore!!”
I'll speculate that they are going to use this as an excuse to let people go without doing mass layoffs and having to pay severance. Training AI is just an excuse.
Many many moons ago I refused to implement a calendar event scraping system at Meta where it would look at all of your meetings on the calendar and do "analysis". IDK what ever happened to that task, I assume it died a death of no one else being willing to do it. This was probably 2011 or so, I can only imagine it has gotten so much worse.
White collar firms with a reputation for paying well don’t cheap out on severance. It’s a cheap way to get employees to sign some stuff reducing the risk of lawsuits, plus their unemployment insurance premiums stay lower.<p>It’s only once the business is having a cash crunch or will no longer need to hire competitive candidates that they start letting people go without severance.
While it would be a hilarious failure mode to encounter, this is actually a good thing!<p>These models already have the skills that humans were using them for, so either by training the models to use subagents or simply inlining the work done by the AI, you have a much easier time training the model to perform tasks from a human-distribution. The humans have done the work of making the human-distribution look more like an AI distribution.
> I like to imagine they’ll mostly capture meta employees using AIs to do work.<p>This will also give them data on which employees aren't using AI enough, and then they'll be PIP'd or let go.
Breaking news: Meta makes the ultimate AI version of the "Cat sits on your keyboard" simulator
It will be interesting to see how the people who maintain (in my opinion) one of the worst offending organizations out there for invading your privacy - and generally treating you in a manner that lacks human decency - respond to having their privacy invaded, and being treated without basic decency.<p>I realize you can argue whatever is done at work should have no expectation of privacy, and I get that, but as an employer myself I've always felt that schemes like keyboard and mouse tracking are going a chasm too far. Your employees are human beings not robots. In the older context of corporate productivity tracking there are far better metrics available - starting with, I don't know, maybe <i>talking</i> to your employee and asking them how things are going.<p>I wouldn't have a problem if it were opt-in, but if this were foisted upon me I would surely quit.
Now this is some hacker news.<p>We’ve been moving towards a more and more tyrannical company controlled society for a long time and now they’re straight up doing hacking tactics to train machines to take our jobs. Doesn’t get much more bleak than that.
I'm so happy that the EU and UK have laws against this kind of thing, and so I will still be able to work somewhere in the future (TBD what future means, though).
Everybody will be a serf under technofeudalism.
Original source: <a href="https://www.reuters.com/sustainability/boards-policy-regulation/meta-start-capturing-employee-mouse-movements-keystrokes-ai-training-data-2026-04-21/" rel="nofollow">https://www.reuters.com/sustainability/boards-policy-regulat...</a>
Every day I grow more and more glad that I turned down a Meta offer. It was probably a hire-to-fire offer anyway, not based on any engineering prowess on my part. Still, I couldn't be more relieved I dodged that bullet.
Growing up we learned about _Slaughterhouse 5_ and _Cat's Cradle_ by Kurt Vonnegut. But there's not enough discussion or awareness of _Player Piano_. Incredibly prescient. These kinds of dystopic headlines are exactly the kind of thing you'd see in the book.
Player Piano is a 1952 sci-fi novel by Vonnegut which explores the social and economic impact of automation replacing labor. If I recall correctly (I read this 15+ years ago), it is told from the perspective of one of the last people with an actually useful job, a person whose job it is to fix the machines that automated away jobs.
Because ends justify means. To quote Boz himself:<p>“ The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is <i>de facto</i> good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”
> connect more people more often is de facto good<p>i've heard it described that evil is that which believes itself to be good without exception. i think i'm starting to agree...
“ That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do bring more communication in. The work we will likely have to do in China some day. All of it.”
It seems like every tech company is moving towards the sweatshop model pioneered by CrossOver/Trilogy, treating engineers as human CPUs at best, monitored 24/7.
For context, when the article says "a list of work-related apps and websites," this includes Google properties like gmail, docs, etc, and social media websites like Facebook and Instagram, with no provision for excluding personal accounts.
No one intelligent should be logging into their personal accounts on their work devices in any case - it's always been the case (at least in the US) that companies can do whatever invasive scanning they want on devices they own.
At meta your personal FB account is your work account. I had to create one to get paid. It’s the same identity used in internal systems.
Yes, but, so what? It isn't a license to train AI on employee personal information.<p>That said -- social media websites were later removed from the "work-related" list. So there was at least some recognition it was overreach and did not match the stated justification.
You know you are at work and monitored.<p>You can browse personal accounts from your phone.
Yeah automatically assume everything on your work computer is available for your employer to see. And everything you do on your own device when connected to their WiFi or VPN.<p>I’m surprised this needs to be said out loud.
on your phone <i>not</i> connected to corp wifi
And Ideally not connected to company WiFi
>You know you are at work and monitored.<p>unless you're in a jurisdiction that has workplace anti-surveillance laws, which, if you don't, you should probably think about getting before Mark Zuckerberg gets the idea to monitor your body temperature from below the waistline
Do most people who work in AI companies realize that if this buildup of reasoning models succeeds at what every tech CEO is aiming for, all of them will be out of a job?
They have nothing else to do. Someone needs to be able to justify their position by creating stupid changes like this to create a line item on their LinkedIn.<p>Meanwhile, nobody seems focused on capturing CEO’s data for AI training.
For those saying that this is fine because company computers are company property...<p>This is like going to work in a drug-lab where everyone is required to strip naked to ensure no "product" can be smuggled out. It's a zero trust environment at first blush, with the added terror of it being used to replace you with AI.<p>People working <i>naked</i> in a drug lab have more job security than meta employees and an equivalent level of respect and trust from their employer. However, they can't unionize because they have no legal protections. Their employer could literally point a gun at them if they complained. That isn't the case for Meta employees. Just sayin'.
Meta going all in on their brand with this.<p>Someone had to do it, distasteful though it may be. Could be quite hilarious what it learns in the process.
I'm so excited to interview for a career at Meta!<p>Also, why are the investors not suing the legs off of Zuck for the whole metaverse debacle? It is a scam and pure fraud. Also a dumb name, sue for that too. Should have just renamed it meeme.
this would be a good time for Meta employees to reconsider their life choices.
The irony of this is so strange..
Setting the dystopia aside, without a lot more context I don't quite get how the captured data will be particularly useful for training models for, say, software engineering. If someone can shed light - thanks!
They have a lot of internal tools. My guess is that it’s to train the model to click around the internal tools
for software engineering? not because of the typing.<p>the signal is every time a human has to grab the wheel. that's a label for what the agent still misses.
Why do we allow this?
Now that the early 10s dev worship era is officially over, all pretensions of "making the world a better place" and being nice have been dropped and devs shall remember what it feels like to be a replaceable cog that can be swapped the way we used to do with phone wallpapers.
I guess this is why they acquired <a href="https://www.limitless.ai/" rel="nofollow">https://www.limitless.ai/</a> ?
Wasn't it a few months ago that some engineer leaked that xAI was building 'Human Emulators'? This is either Meta's attempt at the same, or just a blatant lie to make sure their engineers aren't slacking off. I've heard the workload has more than doubled for those who weren't laid off, which is the only reason I think it might not be an employee monitoring system - I don't think anyone there can afford to not work hard.
Maybe this is exactly why Meta poached Alexandr Wang. Data capturing is an heirloom technique passed down from his Scale AI days
After all the layoffs, labeling people as underperformers while laying them off, etc., can they stoop any lower? Why TF would anyone in their right mind want to join this company?
They pay well. That’s it. That’s the only argument for working for facebook.<p>They don’t add anything beneficial to society. They exist to sell ads.
big prestige from people who still think facebook/instagram are positives in the world i guess
[dupe] <a href="https://news.ycombinator.com/item?id=47851242">https://news.ycombinator.com/item?id=47851242</a><p>[dupe] <a href="https://news.ycombinator.com/item?id=47851086">https://news.ycombinator.com/item?id=47851086</a>
Training on future vi macros. Just<p><pre><code> kk1Gi// file.js<Esc>M/func<Enter>o let<Esc>``
</code></pre>
Taking screenshots too.
> to improve the company's models in areas where they still struggle, like choosing from dropdown menus and using keyboard shortcuts<p>Seems like a strange approach in general. I'd have assumed you'd just have it use accessibility features to get at things, if there is no other interface.
a CLI with a man page should already be usable by an LLM.
Knowing how to make an accessible website is so rare that companies pay me money to do it for them. I wish it was good enough for people, much less companies, to rely on.
Even if no attempt at proper accessibility was made, it's still generally far easier to attempt to find an HTML (or other form of UI) element, than to attempt to scroll to the right spot and use visual inspection to find things.
I wonder if this screen + mouse + keyboard (+ camera + speaker + mic) interface is really the right level of abstraction to model a “digital entity”<p>Sure, you can do everything a human can, but it also seems VERY inefficient<p>As an alternative, maybe you could just do network in/out?
It's the same approach as Windows Recall, but all data remains sovereign to the company generating it.
for agents we have ACP [0] - surely their time would be better spent building this sort of abstraction for computer use than simply teaching an AI to use a mouse?<p>The computer UI is the way it is because that is optimal for humans. If your plan is to replace humans, why not just replace the whole stack, OS and all, with something these models already know how to use?<p>[0] <a href="https://zed.dev/blog/acp-registry" rel="nofollow">https://zed.dev/blog/acp-registry</a>
Honest question: does most of Meta's creepiness trickle down directly from Zuckerberg, or is their entire executive team also this creepy?<p>Or do the executives know better at this point, but the culture is so far gone that no one can fight against it anymore?
Culture is often set top down. Look at the current US administration for a public example. People at the top will choose people who agree with them or who are sycophants. Top execs also chose this job and zuck because they have no moral issues with what the company does... Often if you closely associate with someone creepy or immoral it's because you care more about money and power.
In my experience, a LOT of company culture trickles down from the top. Some of this is by design, e.g. the CEO consciously and publicly rewards certain traits/behaviors. Some of this is accidental in the sense that CEOs, like many humans, have both stated and revealed preferences.<p>There is also this effect:<p>- CEO says "the lights are a bit dim in here"<p>- that turns into "We need to change all of the lightbulbs in here immediately!"<p>(this is especially true in firms where the CEO cares a lot about being proactive).<p>Two great posts/stories about this:<p>1. This post about smart employees "reading their managers minds": <a href="https://yosefk.com/blog/people-can-read-their-managers-mind.html" rel="nofollow">https://yosefk.com/blog/people-can-read-their-managers-mind....</a><p>2. In Michael Crichton's book Disclosure there is a great line: "Why did you dress casually instead of wearing a suit? Is it b/c you wanted to do that or b/c the CEO did it and you wanted to show you were part of the team??"
There are lots of leaked emails showing Zuck is creepy. Recent one I saw where he is directly in the conversations about targeting teens/children. There's a twitter account [1] that posts emails from tech execs that have come out in legal proceedings - it shows the people at the top are very much informed and driving what happens in their companies.<p>[1] <a href="https://x.com/TechEmails" rel="nofollow">https://x.com/TechEmails</a>
> is their entire executive also this creepy?<p>What does this link tell you?
<a href="https://www.thedailybeast.com/facebooks-sheryl-sandberg-told-female-aide-come-to-bed-explosive-book-claims/" rel="nofollow">https://www.thedailybeast.com/facebooks-sheryl-sandberg-told...</a>
Thank that superintelligence-lab supervisor from Scale AI, Alexandr Wang; this guy is really hilarious. He has effectively turned Meta into a Chinese-style company (just like how Scale AI exploits its employees), and so far I haven't seen him deliver anything that matches his annual salary. What he has done is AI infrastructure at best, plus cheap-to-the-point-of-ridiculous labor for training data annotation. I don't think he's suited to this kind of big-picture AI research.
I suspect most employees know better, but Meta pays very well and they just want to maximize their salary and their tenure at the company. Also, it seems Zuckerberg has become creepier lately, very much in phase with the current Zeitgeist.
Seems like Skan AI's solution. They have a few Fortune 500 companies as clients doing exactly the same thing as Meta - capturing keyboard and mouse clicks to ultimately do next level process automation.
Gotta feed the beast some how.
I can’t imagine being mad that the data collection company that I work for now wants data on _me_<p>Really though it seems reasonable to me. They want data to train AI, and their employees are obviously a large source.<p>They could already track your every click. They have root on your work MacBook. Most employers do.
I can't imagine a more useless dataset to collect, proving that Meta might have reached the peak of the graph of (reach/grasp)/time and the numerator is about to plummet spectacularly.
People are just being misused to train their own replacement.<p>Always thought Meta was a god-awfully run company, and this just takes the cake
Eventually every word spoken as well, which is already the case for most meetings, but not yet for individual interactions. Every bit of information at companies will be accessible to AI. This will allow automation all the way up to the C suite.
They probably aren’t seeing the promised productivity improvements of AI in terms of shipping production code, just “super demos” that aren’t robust. So they want to see whether the workers are really putting in the time, or whether the models struggle past a level of complexity that stalls or reverses early gains.
I mean - uh - gotta find all the signals that may exist.<p>I...admire the diligence
From company metrics I have found that developers who make a lot of mouse movements correlate with weaker performance reviews. Something to think about.
‘Meta spokesperson Andy Stone said the data collected would not be used for performance assessments or any other purpose’<p>Horseshit.<p>1. Employees are being asked to train AI to replace them.<p>2. Performance assessments will 100% be impacted. No question.<p>Thinking back on the OTT interview experience that Facebook helped pioneer, imagine making it through that, getting paid a massive sum of money BUT barely getting by on it because of the location, then they drop this crap on you?<p>Big Brother is always watching.
As everybody knows, key strokes and mouse movements are the things that solve problems, definitely the data worth capturing for AI training.
Fucking insane.<p>Optimizing ourselves to death.<p>Capitalism is asleep at the wheel with its foot stuck on the gas pedal.
When you think about it, what actually useful data are you getting from this exercise? It is like strapping a camera on a manual laborer so you can see what he sees - but you don't get data about touch and grip, and you won't get data about why he is making specific moves.
Meta can even afford to destroy themselves and their own employees.<p>More proof that they do not care about you at all. This is Meta's way of moving fast and destroying everything at all costs.
Hey fellow engineers.<p>I know you've long been hypnotized by libertarianism and the cult of the individual.<p>Maybe it's time you reconsider, in light of the overwhelming evidence that the capitalist class is, in fact, not your friend.<p>The only known way for workers to assert their rights is collective action. Alone, you are weak and replaceable. Together, we are strong.<p>It's time for a proper tech workers' union, to give us some fangs to claw back our dignity with.
> Meta (META.O), opens new tab is installing new tracking software on U.S.-based employees’ computers to capture mouse movements, clicks and keystrokes for use in training its artificial-intelligence models, part of a broad initiative to build AI agents that can perform work tasks autonomously, the company told staffers in internal memos seen by Reuters.<p>> The tool will run on a list of work-related apps and websites and will also take occasional snapshots of the content on employees’ screens for context, according to one memo, posted by a staff AI research scientist on Tuesday in a dedicated internal channel for the company's model-building Meta SuperIntelligence Labs team.<p>ALL YOUR DATA IS BELONG TO US<p>¯\_(ツ)_/¯
[dead]
[flagged]
Is US mouse movement different from European? Is it called sparkling mouse movement if it comes from California?
US mouse movements are obviously very vigilant, some people say they're the strongest mouse movements ever seen.<p>Since this is a serious website: I'd be genuinely curious how mouse velocity and trajectories differ between cultural and environmental settings (apart from hardware, which is boring and should be normalized away).<p>There was a time when studies made headlines that were exactly about the relationship between mouse movement, typing, etc., and psychiatric disorders as well as physical health.<p>Obviously, both are related.<p>If you ask me, ad tech would probably be able to tell your religious denomination from this data, once there's enough of it...
Genuine question: would right to left language based interfaces have different type of movements and thus training data than left to right language ones?
Not sure how GDPR could help. They can make generating data for their model your actual job as per contract I guess.
[dead]
[flagged]
Data collection isn’t new. The training is.
But this is a <i>good</i> thing. Let me explain.
Imagine a society where an individual’s rights are prioritised and where society is dedicated to the best interests of each citizen (not desires or wants but reasonable, considered best interests).<p>Now imagine a society where your individual daily actions are recorded, reviewed and helpfully advised upon.<p>Millions of people making millions of actions each day, all recorded, compared and sifted for positive feedback and overall improvement.<p>Just how far ahead would such a society pull compared to one that stays at today’s level? Compared to one that uses totalitarian methods enabled by such surveillance?<p>The difference between Soviet and Western Europe was not the tech, it was the trust.<p>If we can build a society of trust then this tech will turbo charge us.<p>If …