I was one of the first backers of the Oculus Rift Kickstarter. When I got it I decided eye tracking was going to be huge for VR, so as a side project I cut a hole in my Rift and built my own eye tracker. I posted it on Hacker News: https://news.ycombinator.com/item?id=7876471
A few days later the CTO of a small eye tracking startup gave me a call. I quit Google and joined them. I built a (novel at the time) deep neural net based VR eye tracking system for them, and less than two years later Google acquired us.
That's nothing; the "Silicon Valley" part is what happened after the acquisition. Google was so paranoid about "user data" that they forced us to delete every scrap of our training datasets, going as far as physically shredding any laptop that had contained the data. We had a memorable evening at the office with the onsite shredder truck. It couldn't shred the MacBooks because of their aluminum cases, so we ended up smashing them with hammers in the parking lot.
When we got to Google, privacy concerns of course blocked new data collection until we completed an overengineered project to build a shiny new database with access controls and stuff. About a year later, with no technology progress to speak of, the whole eye tracking project was shelved due to a strategy pivot from higher up (unrelated to anything we were or weren't doing), and we all went our separate ways. Fun times!
I hear these stories about smashing hardware from other places too. Absolutely crazy… An ex-colleague spent weeks smashing prototypes of wireless chargers for cars.
You said you were ex-Google, so how come the privacy policy surprised you? Privacy around user data is a big concern, especially for tech giants. Some companies are forced to leave entire markets to adhere to privacy and user-data policies.
I first joined Google in 2010. Back then user data concerns were about good security and engineering. There wasn't a big bureaucracy around regulatory compliance. GDPR didn't even come into effect until 2018.
Our data wasn't actually "user data" in the sense Google usually deals with. It wasn't data collected incidentally after click-through consent from billions of random people on the internet as they use their computers in daily life. It was ~100 people, many of them employees, who voluntarily participated in a one hour in-person data collection session after signing ink-on-paper consent forms, who received monetary compensation for the use of their data, and the data was used solely for training and evaluating models and not cross-linked with any other data for any other purpose. But Google's privacy bureaucracy wanted to apply the same processes and standards as user data collected continuously from billions of internet users.
But of course ultimately it didn't matter. The privacy bureaucracy issues were not at all related to the division-wide strategy pivot that killed our team (and many others). It just made my life very frustrating for the year or so before that happened. And I understand why the bureaucracy exists. In today's climate the PR risk to Google from a hit piece headline like "Google scans your eyeballs and we have the leaked data" is much higher than the probable benefit from a small team's engineering work. So they err on the side of slowing things way down. But that doesn't make it any less frustrating for that small team. And it makes me quite pessimistic about the future development of new technology at Google. I expect that their continuing failure to deploy AI anywhere near as good as GPT-4 can be attributed to similar locally rational risk-averse bureaucracy...
By the time a major jurisdiction like the EU brought in GDPR, regulatory compliance was already long overdue (as usual, governments play catch-up with the business world), hence the rapid rise in "bureaucracy" that followed (I guess). For example, if a company falls short on compliance (say Cambridge Analytica), the issue bubbles up to the network (say FB or ByteDance); if FB fails, it bubbles up to the marketplace (say Apple); and if Apple fails at that level, it's so high up that governments get involved, to the point that it could trigger intercontinental trade wars.
Hence we're seeing Apple, ByteDance, Amazon (and no doubt Google) make regulatory compliance a bigger part of their core business than ever before: prevention over treatment.
Your team's case seems unfortunate, given the narrow scope of the trials. My initial guess is that anything involving eye scanning could trigger biometric-ID (iris recognition) compliance worries. I get your concern about future tech development, but I also think businesses (especially startups) will have to find new ways to adapt to these requirements, since "bureaucracy" will naturally keep increasing.