Lidars have come down in price ~40x.<p><a href="https://cleantechnica.com/2025/03/20/lidars-wicked-cost-drop/amp/" rel="nofollow">https://cleantechnica.com/2025/03/20/lidars-wicked-cost-drop...</a><p>Meanwhile, vision-based tech is going up in price because it has to compete with AI for GPUs, while lidar gets the range/depth side of things for free.<p>Ideally cars use both, but if you had to choose one or the other for cost you’d be insane to choose vision over lidar. Musk made an ill-timed decision to go vision only.<p>So it’s not a surprise to see the low-end models with lidar.
I wonder if ubiquity doesn’t affect lidar performance? Wouldn’t the systems see each other’s laser projections if there are multiple cars close to each other? Also, is
LIDAR immune to other issues like bright third-party sources? At least on iPhone I’m seeing Face ID performance degradation. I also suspect other issues, like thin or transparent objects not being detected.<p>With vision you rely on an external source or a flood light. It’s also how our civilization is designed to function in the first place.<p>Anyway, the whole self-driving obsession is ridiculous because being driven around in bad traffic isn’t that much better than driving in bad traffic. It’s cool but can’t beat public infrastructure, since you can’t make the car disappear when not in use.<p>IMHO, connectivity to simulate public transport could be the real sweet spot, regardless of sensor types. Coordinated cars could solve traffic and pretend to be trains.
LIDAR systems use timing, phase locking, and software filtering to identify and eliminate interference from other units. There is still a risk of interference, resulting in reduced range, added noise, etc.
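A toy sketch of just the time-gating part (single pulse, made-up range limit; real units also code their pulses and phase-lock, which this ignores):

```python
# Toy lidar time-gating: accept only returns whose round-trip time is
# consistent with our own emitted pulse; reject stray pulses from other units.
C = 299_792_458.0  # speed of light, m/s

def expected_window(max_range_m):
    """Valid round-trip time window (seconds) for targets within max_range_m."""
    return (0.0, 2.0 * max_range_m / C)

def filter_returns(return_times_s, max_range_m=200.0):
    """Keep returns that arrive inside the valid round-trip window."""
    lo, hi = expected_window(max_range_m)
    return [t for t in return_times_s if lo < t <= hi]

# Returns from targets at 50 m and 150 m fall inside the window; a stray
# pulse arriving at 5 microseconds (far beyond the 200 m window) is dropped.
returns = [2 * 50 / C, 2 * 150 / C, 5e-6]
print(filter_returns(returns))  # two values survive
```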
I'd assume not since Waymo uses lidar and has entire depots of them driving around in close proximity when not in use.
I'm not a self-driving believer (never had the opportunity to try it, actually), but I'd say bad traffic would be the number one case where I'd want it. I don't mind highway driving, or city driving if traffic is good, but stop and go traffic is torture to me. I'd much rather just be on my phone, or read a book or something.<p>Agreed that public transportation is usually the best option in either case, though.
To me any kind of driving is torture. I don't want the responsibility, the risk, the chance of fines if I miss a speed sign somewhere. And if my car could self drive I could spend the time usefully instead of wasting it on driving. It would be amazing.<p>Right now I don't even have a car but for getting around outside of the city it's difficult sometimes.
Yeah, I feel ya. I don't mind it, but I'm far from loving it. What particularly stresses me out is how I can be screwed even doing everything correctly, if someone else screws up.<p>All reasons why I think public transit is the better solution over self driving cars. They're generally much safer, and also you get to do something while you're on the go. Pretty nifty, I think.
Yes that's why I don't own a car. In a big city public transit is amazing. I spend 20 bucks a month on unlimited travel. That won't even buy me a headlight bulb for a car these days lol. When I still owned one I had to pay for the car, insurance, road tax, fuel, maintenance, parking, tolls. It felt like it was dragging me down the whole time. It's insane how much costs add up.<p>I love public transport and an added benefit is: I don't have to go back to where I left it. I often take a metro from A to B, walk to C and then get a bus back to A or something. Can't do that with a car, as such I tend to walk a lot more now. Because it's a hassle-free option now. The world seems more open for exploration when I don't have to worry about returning to the car, or having a drink, or the parking meter expiring. I really don't get that people consider cars freedom.<p>Of course once you go outside the city it's a different story, even here in Europe. Luckily I don't need to go there so much. But that's something that should be improved. On the weekend here in the city the metro runs 24/7 and the regional trains really should too but they don't.
I used to think like that before I started driving, it's way more structured and harder to screw up than you'd think.<p>Avoiding potholes is the hardest part of driving, really.
Unfortunately in my region highway traffic is quite congested, and so called "adaptive cruise control" is a game changer. I find it reduces fatigue by a lot. Usually the trucks are all cruising at the speed limit and I just hang with them. I only change lanes if they slow down or there's an obstruction etc.
Driving in fog is the number one reason I want lidar looking out.
There are regular 100+ car pileups in California's Central Valley due to fog. Cars crash in a lot of situations because the driver simply can't see. We need something better than vision to avoid these kinds of accidents.<p>Coordinated cars won't work unless all cars are built the same, maintained 100% the same, and regularly inspected. You can't have a car driving 2 inches from the car in front if it can't stop just as fast as the car in front. People already neglect their cars, change brake compounds, and get stuck purchasing low-quality brake parts due to the lack of availability of good components.<p>Next time you see some total beater driving down the road, imagine that car 2 inches off your rear bumper; not even a computer can make up for poor maintenance. Imagine that 8000lb pickup with its cheap oversized tires right in your rearview mirror with its headlights in your face. It's not going to be able to stop either.
A combination of cameras, lidar, radar, and ultrasonic fused together gives a strong sense of perception, since they fill in each other's gaps (short and long range, different parts of the electromagnetic spectrum, plus sound).<p>The good news is they're all at commodity hardware prices now.<p>Tesla removing radar and the parking ultrasonic sensors was a self-own. Computer vision inference is pretty bad when all the camera sees is a white wall when backing up.<p>Fog - radar will perceive the car. Multi-car crash - long-range radar picks it up.<p>Bright glare from the sun - lidar picks it up. Lidar misses something - the camera picks it up.<p>Waymo has the correct approach on perception. Pack the car with sensors so it has a superhuman view of the environment around it.
They're wideband EM devices, so the problem of congested spectrum can be dealt with by the same sort of techniques used by WiFi and mobile phone services.
I imagine seismic imaging has already solved a lot of that.<p>You know a lot about the light you are sending, and what the speed of light is, so you can filter out unexpected timings and make sense of multiple returns.
If you have to choose one over the other, it has to be vision surely?<p>Even ignoring various current issues with Lidar systems that aren’t fundamental limitations, large amounts of road infrastructure is just designed around vision and will continue to be for at least another few decades. Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.<p>Personally I don’t buy the argument that it has to be one or the other as Tesla have claimed, but between the two, vision is the only one that captures all the data sufficient to drive a car.
For one, no one is seriously contemplating a LIDAR-only system, the question is between camera+LIDAR or camera-only.<p>> Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.<p>Actually, given that basically every meaningful LIDAR on the market gives an "intensity" value for each return, in surprisingly many cases you could get this kind of imaging behavior from LIDAR so long as the point density is sufficient for the features you wish to capture (and point density, particularly in terms of points/sec/$, continues to improve at a pretty good rate). A lot of the features that go into making road signage visible to drivers (e.g. reflective lettering on signs, cats eye reflectors, etc) also result in good contrast in LIDAR intensity values.
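As a toy illustration of that intensity idea (hypothetical point format and made-up intensity values), a simple threshold is enough to separate retroreflective features from background returns:

```python
# Made-up lidar points as (x, y, z, intensity), intensity normalized to [0, 1].
# Retroreflective paint and sign sheeting typically return far higher
# intensity than asphalt or bare metal, which is what makes this workable.
points = [
    (12.0, 0.1, 0.0, 0.05),  # road surface
    (12.1, 0.2, 0.0, 0.92),  # painted lane marking
    (30.0, 3.0, 2.5, 0.88),  # reflective sign face
    (30.0, 3.1, 2.5, 0.10),  # sign post
]

def reflective_points(pts, threshold=0.5):
    """Keep returns whose intensity suggests a retroreflective surface."""
    return [p for p in pts if p[3] >= threshold]

print(len(reflective_points(points)))  # the lane marking and the sign face
```

Whether the lettering itself is legible then comes down to point density, as the comment above notes.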
> camera+LIDAR<p>It's like having 2 pilots instead of 1 pilot. If one pilot is unexpectedly defective (has a heart attack mid-flight), you still have the other pilot. Some errors between the 2 pilots are correlated of course, but many of them aren't. So the chance of an at-fault crash goes from p and approaches p^2 in the best case. That's an unintuitively large improvement. Many laypeople's gut instinct would be more like a p -> p/2 improvement from having 2 pilots (or 2 data streams in the case of camera+LIDAR).<p>In the camera+LIDAR case, you conceptually require AND(x.ok for all x) before you accelerate. If only one of those systems says there's a white truck in front of you, then you hit the brakes, instead of requiring both of them to flag it. False negatives are what you're trying to avoid, because the confusion matrix shouldn't be equally weighted given the catastrophic downside of a crash. That's where two somewhat independent data streams become so powerful at reducing crashes: you really benefit from those ~uncorrelated errors.
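A quick back-of-the-envelope sketch of that statistical argument, with a hypothetical per-system miss rate:

```python
# With two independent failure modes at probability p each, the joint
# failure (both miss the obstacle) is p**2, not the intuitive p/2.
p = 0.01  # hypothetical per-system miss rate

single_system = p
naive_guess = p / 2   # the "gut instinct" improvement
independent = p ** 2  # both systems fail together, assuming independence

# independent is two orders of magnitude better than naive_guess here.
print(single_system, naive_guess, independent)

# Conservative fusion: accelerate only when ALL sensors agree the path is
# clear, i.e. brake if EITHER one flags an obstacle.
def safe_to_accelerate(sensor_clear_flags):
    return all(sensor_clear_flags)

assert safe_to_accelerate([True, True])
assert not safe_to_accelerate([True, False])  # one sensor sees the truck
```

Correlated errors (the epsilon in practice) eat into the p^2 figure, which is why the independence of the physical sensing mechanisms matters so much.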
"In the camera+LIDAR case, you conceptually require AND(x.ok for all x) before you accelerate."
This can be learnt by the model. Let's assume vision is 100% correct, the model would learn to ignore LIDAR, so the worst case scenario is that LIDAR is extra cost for zero benefit.
> Let's assume vision is 100% correct<p>This is not going to be true for a very long time, at least so long as one's definition of "vision" is something like "low-cost passive planar high-resolution imaging sensors sensitive to the visual and IR spectrum" (I include "low-cost" on the basis that while SWIR, MWIR, and LWIR sensors do provide useful capabilities for self-driving applications, they are often equally expensive, if not much more so, than LIDARs). Camera sensors have gotten quite good, but they are still fundamentally much less capable than the human eyes plus visual cortex in terms of useful dynamic range, motion sensitivity, and depth cues - and human eyes regularly encounter driving conditions which interfere or prohibit safe driving (e.g. mist/ fog, heavy rain/snow, blowing sand/dust, low-angle sunlight at sunrise/sunset/winter). One of the best features of LIDAR is that it is either immune or much less sensitive to these phenomena at the ranges we care about for driving.<p>Of course, LIDAR is not without its own failings, and the ideal system really is one that combines cameras, LIDARs, and RADARs. The problem there is that building automotive RADAR with sufficient spatial resolution to reliably discriminate between stationary obstacles (e.g. a car stalled ahead) and nearby clutter (e.g. a bridge above the road) is something of an unsolved problem.
The worst case scenario is that LIDAR is a rapidly falling extra cost for zero benefit? Sounds like it's a good idea to invest in cheap LIDAR just in case the worst case doesn't happen. Even better, you can get a head start by investing in the solution early and abandon it when it becomes obsolete.<p>By the way, Tesla engineers secretly trained their vision systems using LIDAR data because that's how you get training data. When Elon Musk found out, he fired them.<p>Finally, your premise is nonsensical. Using end-to-end learning for self-driving sounds batshit crazy to me. Traffic rules are very rigid and differ depending on the location. Tesla's self-driving solution gets you ticketed for traffic violations in China. Machine learning is generally used to "parse" the sensor output into a machine representation, and then classical algorithms do most of the work.<p>The rationale for being against LIDAR seems to be "Elon Musk said LIDAR is bad" and is not based on any deficiency in LIDAR technology.
Isn’t that also like having two watches? You’ll never know the time
If you're on a desert island and you have 2 watches instead of 1, the probability of failure (defined as "don't know the time") within T years goes from p to p^2 + epsilon (where epsilon encapsulates things like correlated manufacturing defects).<p>So in a way, yes.<p>The main difference is that "don't know the time" is a trivial consequence, but "crash into a white truck at 70mph" is non-trivial.<p>But it's the same statistical reasoning.
It's different because the challenge with self-driving is not to know the exact time. You win for simply noticing the discrepancy and stopping.<p>Imagine if the watch simply tells you if it is safe to jump into the pool (depending on the time it may or may not have water). If watches conflict, you still win by not jumping.
I was responding to the parent who said if you had to make a choice between lidar and vision, you'd pick lidar.<p>I know there are theoretical and semi-practical ways of reading those indicators with features that are correlated with the visual data, for example thermoplastic line markings create a small bump that sufficiently advanced lidar can detect. However, while I'm not a lidar expert, I don't believe using a completely different physical mechanism to read that data will be reliable. It will surely inevitably lead to situations where a human detects something that a lidar doesn't, and vice versa, just due to fundamental differences in how the two mechanisms work.<p>For example, you could imagine a situation where the white lane divider thermoplastic markings on a road has been masked over with black paint and new lane markings have been painted on - but lidar will still detect the bump as a stronger signal than the new paint markings.<p>Ideally while humans and self driving coexist on the same roads, we need to do our best to keep the behaviour of the sensors to be as close to how a human would interpret the conditions. Where human driving is no longer a concern, lidar could potentially be a better option for the primary sensor.
> For example, you could imagine a situation where the white lane divider thermoplastic markings on a road has been masked over with black paint and new lane markings have been painted on - but lidar will still detect the bump as a stronger signal than the new paint markings.<p>Conflicting lane marking due to road work/changes is already a major problem for visual sensors and human drivers, and something that fairly regularly confuses ADAS implementations. Any useful self-driving system will already have to consider the totality of the situation (apparent lane markings, road geometry, other cars, etc) to decide what "lane" to follow. Arguably a "geometry-first" approach with LIDAR-only would be more robust to this sort of visual confusion.
Everyone is missing the point, including Karpathy, which is the most surprising because he is supposed to be one of the smart ones.<p>The focus shouldn't be on which sensor to use. If you are going to use humans as examples, just take the time to think about how a human drives. We can drive with one eye. We can drive with a screen instead of a windshield. We can drive with a wiremesh representation of the world. We also use audio signals quite a bit when driving.<p>The way to build a self-driving suite is to start with the software that builds your representation of the world first. Then any sensor you add is a fairly trivial problem of sensor fusion + Kalman filtering. That way, as certain tech gets cheaper and better (or more expensive and worse), you can just swap in what you need to achieve x degree of accuracy.
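For what it's worth, the core of that fusion step is tiny. Here is a minimal sketch under a Gaussian-noise assumption, with made-up camera/lidar variances: inverse-variance weighting, which is the one-step heart of a Kalman update.

```python
# Fuse two independent Gaussian estimates of the same quantity (e.g. range
# to the car ahead) by weighting each with its inverse variance.
def fuse(mean_a, var_a, mean_b, var_b):
    """Inverse-variance weighted fusion; result has lower variance than either input."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

# Hypothetical numbers: camera depth estimate is noisy (variance 4.0 m^2),
# lidar range is tight (variance 0.04 m^2).
mean, var = fuse(21.0, 4.0, 20.0, 0.04)
print(mean, var)  # pulled strongly toward the lidar reading; var below both inputs
```

A full Kalman filter adds a motion model and repeats this update every timestep, but the sensor-agnostic spirit of the comment above is visible already: add another sensor and you just add another term to the weighted sum.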
> ...just take the time to think how a human drives...<p>We truly have no understanding of how the human brain really models the world around us and reasons over motion, and frankly anyone claiming to is lying and trying to sell something. "But humans can do X with just Y and Z..." is a very seductive idea, but the reality is "humans can do X with just Y, Z, and an extremely complex and almost entirely unknown brain" and thus trying to do X with just Y and Z is basically a fool's errand.<p>> ...builds your representation of the world first...<p>So far, I would say that one of the very few representations that can be meaningfully decoupled from the sensors in use is world geometry, and even that is a very weak decoupling because the ways you performantly represent geometry are deeply coupled with the capabilities of your sensors (e.g. LIDAR gives you relatively sparse points with limited spatial consistency, cameras give you dense points with higher spatial consistency, RADAR gives you very sparse targets with velocity). Beyond that, the capabilities of your sensors really define how you represent the world.<p>The alternative is that you do not "represent" the world but instead have that representation emerge implicitly inside some huge neural net model. But those models and their training end up even more tightly coupled to the type of data and capabilities of your sensors and are basically impossible to move to new sensor types without significant retraining.<p>> Then any sensor you add in is a fairly trivial problem of sensor fusion + Kalman filtering<p>"Sensor fusion" means everything and nothing; there are subjects where "sensor fusion" is practically solved (e.g. IMU/AHRS/INS accelerometer+gyro+magnetometer fusion is basically accepted as solved with EKF) and there are other areas where every "fusion" of multiple sensors is entirely bespoke.
Sorry if this is obvious, but are there actually any systems that "choose one over the other"? My impression's always been it was either vision + LIDAR, or vision alone. Are there any examples of LIDAR alone?
Don't even the ones which are vision + LIDAR ultimately have to choose priority in terms of one or the other, for "What do you do if LIDAR says it is blocked and vision says it is clear, or vice versa?" Trying to handle edge cases where, say, LIDAR thinks that sprinkler mist is a solid object and swerves to avoid it, or vision thinks that an optical illusion is a real path and not a brick wall.
Since the current traffic infrastructure was built for human drivers with vision, you’ll probably need some form of vision to navigate today’s roads. The only way I could picture lidar only working would be on a road system specially made for machine driving.
Not that I'm aware of, but I was referring to the claim in the parent post that if you had to choose it would be insane to choose vision over LIDAR.
Roombas
Roomba (specifically the brand of the American company iRobot) only added lidar in 2025 [1]. Earliest Roombas navigated by touch (bumping into walls), and then by cameras.<p>But if you use "roomba" as a generic term for robot vacuum then yes, Chinese Ecovacs and Xiaomi introduced lidar-based robot vacuums in 2015 [2].<p>[1] <a href="https://www.theverge.com/news/627751/irobot-launches-eight-new-roombas-with-lidar-room-mapping" rel="nofollow">https://www.theverge.com/news/627751/irobot-launches-eight-n...</a><p>[2] <a href="https://english.cw.com.tw/article/article.action?id=4542" rel="nofollow">https://english.cw.com.tw/article/article.action?id=4542</a>
> Earliest Roombas navigated by touch (bumping into walls)<p>My ex got a Roomba in the early 2010s and it gave me an irrational but everlasting disdain for the company.<p>They kept mentioning their "proprietary algorithm" like it was some amazing futuristic thing but watching that thing just bump into something and turn, bump into something else and turn, bump into something again and turn again, etc ... it made me hate that thing.<p>Now when my dog can't find her ball and starts senselessly roaming in all the wrong directions in a panic, I call it Roomba mode.
> Earliest Roombas navigated by touch (bumping into walls)<p>My neighbour used to park like that; "that's what the bumpers are for - bumping"
Neato XV-11 introduced lidar in 2010. Sadly they're no more.
I don't think they would be as well accepted into peoples homes if they had a mobile camera on it. Didn't they already leak peoples home mappings?
For full self-driving, sure, but lidar can manage well enough for the more regular assisted driving (the basic "knows where other cars are in relation to you and can brake/turn/alarm to avoid collisions") as well as adaptive cruise control.<p>I think FSD should use both at minimum, though. No reason to skimp on a now-inexpensive sensor that sees things vision alone doesn't.
Given that a good proportion of his success has rested on simplifying or commodifying existing expensive technology (e.g. rockets, and much of the technology needed to make them; EV batteries), it's surprising that Musk's response to lidar being (at the time) very expensive was to avoid it despite the additional challenges this brought, rather than to carve a moat by innovating and creating cheaper and better lidar.<p>> So it’s not a surprise to see the low end models with lidar.<p>They could be going for a Tesla-esque approach, in that by equipping every car in the fleet with lidar, they maximise the data captured to help train their models.
It's the same with his humanoid robot. Instead of building yet another useless hype machine, why not simply do vertical integration and build your own robot arms? You have a guaranteed customer (yourself) and once you have figured out the design, you can start selling to external customers.
Because making boring industrial machinery doesn't sustain a PE ratio of about 300. Only promising the world does that.
> why not simply do vertical integration and build your own robot arms?<p>Robot arms are neither a low-volume unique/high-cost market (SpaceX), nor a high-volume/high-margin business (Tesla). On top of that it's already a quite crowded space.
The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was. He seemed to have drank his own koolaid back then.<p>And if he still doesn’t realize and admit he is wrong then he is just plain dumb.<p>Pride is standing in the way of first principles.
I think there’s room for both points of view here. Going all in on visual processing means you can use it anywhere a person can go, in any other technology; Optimus robots are just one example.<p>And he’s not wrong that roads and driving laws are all built around human visual processing.<p>The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self-driving continued operating normally, makes a good case for the approach.
Didn't waymo stop operating simply because they aren't as cavalier as Tesla, and they have much more to lose since they are actually self driving instead of just driver assistance? Was the lidar/vision difference actually significant?
The reports I’ve read said that some continued to attempt to navigate with the traffic lights out, but the vehicles all have a remote-confirmation step where they call home to confirm what to do. That ended up self-DDoSing Waymo, causing vehicles to stop in the middle of the road and at intersections with their hazards on.<p>So to clarify, it wasn’t really a lidar problem; it was the need to call home to navigate.
> Going all in on visual processing means you can use it anywhere a person can go in any other technology, Optimus robots are just one example.<p>Sure, and using lidar means you can use it anywhere a person can go in any other technology too.
> roads and driving laws are all built around human visual processing.<p>And people die all the time.<p>> The recent example of a power outage in SF where lidar powered Waymo’s all stopped working when the traffic lights were out and Tesla self driving continued operating normally makes a good case for the approach.<p>Huh? Waymo is responsible for injury, so all their cars called home at the same time, DoS-ing themselves, rather than risk killing someone.<p>Tesla takes no responsibility and does nothing.<p>I can’t see the logic that makes vision-only relevant to the lights being out. At all.
> And people die all the time.<p>Yes... but people can only focus on one thing at a time. We don't have 360 vision. We have blind spots! We don't even know the exact speed of our car without looking away from the road momentarily! Vision based cars obviously don't have these issues. Just because some cars are 100% vision doesn't mean that it has to share all of the faults we have when driving.<p>That's not me in favour of one vs the other. I'm ambivalent and don't actually care. They can clearly both work.
> And people die all the time.<p>They do, but the rate is extremely low compared to the volume of drivers.<p>In 2024 in the US there were about 240 million licensed drivers and an estimated 39,345 fatalities, which is 0.016% of licensed drivers. Every single fatality is awful but the inverse of that number means that 99.984% of drivers were relatively safe in 2024.<p>Tesla provided statistics on the improvements from their safety features compared to the active population (<a href="https://www.tesla.com/fsd/safety" rel="nofollow">https://www.tesla.com/fsd/safety</a>) and the numbers are pretty dramatic.<p>Miles driven before a major collision<p>699,000 - US Average<p>972,000 - Tesla average (no safety features enabled)<p>2.3 million - Tesla (active safety features, manually driven)<p>5.1 million - Tesla FSD (supervised)<p>It's taking something that's already relatively safe and making it approximately 5-7 times safer using visual processing alone.<p>Maybe lidar can make it even better, but there's every reason to tout the success of what's in place so far.
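Taking those quoted figures at face value, the arithmetic checks out:

```python
# Fatality rate as a share of licensed drivers (figures quoted above).
fatalities = 39_345
drivers = 240_000_000
print(fatalities / drivers)  # roughly 0.016% of licensed drivers

# Improvement ratios from the Tesla-reported miles-per-major-collision data.
us_avg = 699_000       # US average
tesla_active = 2_300_000  # Tesla, active safety features, manually driven
fsd = 5_100_000        # Tesla FSD (supervised)
print(tesla_active / us_avg)  # roughly 3.3x
print(fsd / us_avg)           # roughly 7.3x
```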
No, you're making the mistake of taking Tesla's stats as comparable, which they are not.<p>Comparing "the subset of driving on only the roads where FSD is available, active, and did not turn itself off because of weather, road, traffic or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?<p>Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries? Including accidents that were sufficiently serious that airbags could not or did not deploy?
The stats on the site break it into major and minor collisions. You can see the above link.<p>I have no doubt that there are ways to take issue with the stats. I'm sure we could look at accidents from 11pm - 6am compared to the volume of drivers on the road as well.<p>In aggregate, the stats are the stats though.
> And people die all the time.<p>Most of them cannot drive a car. People have crashes for so many reasons.
What Tesla self driving is that? The one with human drivers? I don't believe they have gotten their permits for self driving cars yet.
I wonder how much of their trouble comes from other failures in their plan (avoiding the use of pre-made maps and single city taxi services in favor of a system intended to drive in unseen cities) vs how much comes from vision. There are concerning failure modes from vision alone but it’s not clear that’s actually the reason for the failure. Waymo built an expensive safe system that is a taxi first and can only operate on certain areas, and then they ran reps on those areas for a decade.<p>Tesla specifically decided not to use the taxi-first approach, which does make sense since they want to sell cars. One of the first major failures of their approach was to start selling pre-orders for self driving. If they hadn’t, they would not have needed to promise it would work everywhere, and could have pivoted to single city taxi services like the other companies, or added lidar.<p>But certainly it all came from Musk’s hubris, first to set out to solve the self driving in all conditions using only vision, and then to start selling it before it was done, making it difficult to change paths once so much had been promised.
> And if he still doesn’t realize and admit he is wrong then he is just plain dumb.<p>The absolute genius made sure that he can't back out without making it bleedingly obvious that old cars can never be upgraded for a LIDAR-based stack. Right now he's avoiding a company-killing class action suit by stalling, hoping people will get rid of HW3 cars, (and you can add HW4 cars soon too) and pretending that those cars will be updated, but if you also need to have LIDAR sensors, you're massively screwed.
> The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was.<p>History is replete with smart people making bad decisions. Someone can be exceptionally smart (in some domains) <i>and</i> have made a bad decision.<p>> He seemed to have drank his own koolaid back then.<p>Indeed; but he was on a run of success, based on repeatedly succeeding deliberately against established expertise, so I imagine that Koolaid was pretty compelling.
> The ways in which Musk dug himself in when experts predicted<p>This had happened a load of times with him. It seemed to ramp up around paedo sub, and I wonder what went on with him at that time.
To be frank, no one had a crystal ball back then, and things could have gone either way, with uncertainty in both hardware and software capabilities. Sure, lidars were better even back then, but the bet was on vision catching up to them.<p>I hate Elon's personality and political activity as much as anyone, but from a technical PoV it is clear that he did logical things. Actually, the fact that he was mistaken and still managed not to bankrupt Tesla says something about his skills.
Musk has for a long time now been convinced that all problems in this space are solvable via vision.<p>Same deal with his comments about how all anti-air military capability will be dominated by optical sensors.
Will there be major difference in ride experience when you take a Waymo vs Robotaxi?
Between anti-Musk sentiment, competition in self driving and the proven track record of Lidar, I think we’ll start seeing jurisdictions from Europe to New York and California banning camera-only self-driving beyond Level 3.
Nah, you don't need to ban anything. Just force the rule, that if company sells self driving, they are also taking full liability for any damages of this system.
Why is it preferable to wait for people to die and then sue the company instead of banning it in the first place?
People die in car crashes all the time. Self driving can kill a lot of people and still be vastly better than humans.
They don't have to die first. The company can avoid the expense by planning how not to kill people.<p>If you charged car makers $20m per pedestrian killed by their cars regardless of fault you'd probably see much safer designs.
By this logic, then we should also create a rule for regular, non-self-driving that says, if you have a car accident that kills someone, all your wealth is taken away and given to the victim's family. If we had a rule like this, then "you'd probably see much safer driving". Are you willing to drive under those circumstances? I am sure you will say yes, but it does not make your suggestion any less ridiculous.
> They don't have to die first. The company can avoid the expense by planning how not to kill people.<p>This is an extremely optimistic view on how companies work
This doc from 1999 has an answer: <a href="https://www.youtube.com/watch?v=SiB8GVMNJkE" rel="nofollow">https://www.youtube.com/watch?v=SiB8GVMNJkE</a>
Usually its capitalism, because in America, they can just buy carveouts after the fact.
We cannot even properly ban asbestos, expecting people to die first is just having a realistic perspective on how the US government works WRT regulations.
> <i>if company sells self driving, they are also taking full liability for any damages of this system</i><p>This is basically what we have (for reasonable definitions of full).
That's a legal non-starter for all car companies. They would be made liable for <i>every</i> incident where self-driving vehicles were spotted in close vicinity, independently of the suit being legit. A complete nightmare, and totally unrelated to the tech. Makers would spend more time and money clearing their asses in court than building safe cars.
What "extra GPU"?<p>LIDAR is also straight up worthless without an unholy machine learning pipeline to massage its raw data into actual decisions.<p>Self-driving is an AI problem, not a sensor problem - you aren't getting away from AI no matter what you do.
Depends on the specific lidar model. It seems that there's a wide range of lidar prices and capabilities and it's hard to find pricing info.
^ This; the article is quoting a LIDAR price ($25K) from years ago.
It wasn't ill timed. Any sane leader understands that both the size and cost of tech come down rather quickly over time. He just refused to accept lidar uglifying his cars, or to wait for it to get smaller. Instead he fabricated the line that humans don't have lidar so cars shouldn't need it, sold "no lidar on Teslas" as an advantage instead of the opposite, and refuses to accept the truth because of his ego. Firing all the non-yesmen didn't help either.
Could it also be about the looks? Waymo has a rather industrial look, with so many LiDARs, and the roof turret.
> choose vision over lidar<p>I mean, you have to have vision to drive. What are you getting at? You can't have a lidar only autonomous vehicle.
<p><pre><code> >Lidars come down in price ~40x.
</code></pre>
Is that really true? Extraordinary claims require extraordinary proof.<p>Ars cites this <i>China Daily</i> article[0], which gives no specifics and simply states:<p><pre><code> >A LiDAR unit, for instance, used to cost 30,000 yuan (about $4,100), but now it costs only around 1,000 yuan (about $138) — a dramatic decrease, said Li.
</code></pre>
How good are these $138 LiDARs? Who knows, because this article gives no information.<p>This article[1] from around the same time gives more specifics, listing under "1000 yuan LiDARs" the RoboSense MX, Hesai Technology ATX, Zvision Technologies ZVISION EZ5, and the VanJee Technology WLR-760.<p>The RoboSense MX is selling for $2,000-3,000, so it's not exactly $138. It was going to be added to XPENG cars, before they switched away from LiDAR. Yikes.<p>The ATX is $1400, the EZ5 isn't available, and the WLR-760 is $3500. So the press release claims of sub-$200 never really materialized.<p>Furthermore, all of these are low beam count LiDARs with a limited FOV. These are 120°x20°, whereas Waymo sensors cover 360°x95° (and it still needs 4 of them).<p>It seems my initial skepticism was well placed.<p><pre><code> >if you had to choose one or the other for cost you’d be insane to choose vision over lidar
</code></pre>
Good luck with that. LiDAR can't read signs.<p>[0] <a href="https://global.chinadaily.com.cn/a/202503/06/WS67c92b5ca310c240449d90b4.html" rel="nofollow">https://global.chinadaily.com.cn/a/202503/06/WS67c92b5ca310c...</a><p>[1] <a href="https://finance.yahoo.com/news/china-beijing-international-automotive-sensor-082300183.html" rel="nofollow">https://finance.yahoo.com/news/china-beijing-international-a...</a>
I hate the guy, but I get the decision. A point cloud has a ceiling that the visible spectrum doesn’t, evidenced by our lack of lidar.
Yes lidar has limitations, but so does machine vision. That’s why you want both if you can have it. LIDAR is more reliable at judging distance than stereo vision. Stereo vision requires there to be sufficient texture (features) to work. It can be thrown off by fog or glare. A white semi trailer can be a degenerate case. It can be fooled by optical illusions.<p>Yes, humans don’t have built in lidar. But humans do use tools to augment their capabilities. The car itself is one example. Birds don’t have jet engines, props, or rotors… should we not use those?
It's because stereo vision is "cheap" to implement, not because theoretical biological lidar has a "ceiling".
Can lidar say what colour a traffic light is?
It’s not either lidar or regular cameras. Use both and combine the information to exceed human performance.
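One minimal sketch of what "combine the information" could mean: inverse-variance weighting of two range estimates, so the less noisy sensor dominates. The variance numbers here are hypothetical stand-ins for real per-sensor noise models.

```python
# Sketch: fuse a camera (stereo) range and a lidar range by
# inverse-variance weighting. Lower variance => higher weight.

def fuse_ranges(camera_m, camera_var, lidar_m, lidar_var):
    """Return the fused range estimate and its (reduced) variance."""
    w_cam = 1.0 / camera_var
    w_lidar = 1.0 / lidar_var
    fused = (w_cam * camera_m + w_lidar * lidar_m) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)
    return fused, fused_var

# Hypothetical readings: lidar (low variance) pulls the estimate its way.
est, var = fuse_ranges(camera_m=48.0, camera_var=4.0,
                       lidar_m=50.0, lidar_var=0.25)
print(round(est, 2))  # 49.88
```

The fused variance is always smaller than either input's, which is the formal sense in which two sensors beat one.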
I believe traffic lights currently use three bulbs, red, yellow and green. Even without color a computer system can easily determine when each light is lit.<p>If there are single bulbs displaying red, green and yellow please give clear examples.
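To illustrate the point about not needing color: for a standard vertical housing (red on top, yellow in the middle, green at the bottom), the lit bulb's position alone determines the state. A minimal sketch, assuming an upstream detector has already found the housing's bounding box and the lit bulb's centroid (all values hypothetical):

```python
# Sketch: infer traffic-light state from bulb position alone,
# assuming a standard vertical housing (red top, yellow middle,
# green bottom). Inputs would come from an upstream detector.

def classify_by_position(housing_top, housing_height, lit_bulb_y):
    """Map the lit bulb's vertical centroid to red/yellow/green."""
    relative = (lit_bulb_y - housing_top) / housing_height
    if relative < 1 / 3:
        return "red"      # top third of the housing
    elif relative < 2 / 3:
        return "yellow"   # middle third
    return "green"        # bottom third

# A bulb lit near the top of a 90 px tall housing reads as red.
print(classify_by_position(housing_top=100, housing_height=90,
                           lit_bulb_y=115))  # red
```

This obviously breaks for single-bulb flashers and horizontal housings, which is exactly the objection raised below.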
Flashing lights over rural intersections often do that. There is only one bulb there (yellow or red), so position is not a signal.
Have you driven in America? We have the craziest lights you've ever seen. And that's just in my state
How about turn signal vs brake lights?
Something I've heard noises about is time-of-flight systems for traffic. I think the idea is you put those systems on traffic lights, cars, bicycles, and pedestrians, and then cars can know where those things are.
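As a rough sketch of what such a beacon might broadcast: a tagged position plus a timestamp. Real V2X deployments use standardized messages (e.g. the SAE J2735 Basic Safety Message); the field names and values below are purely illustrative.

```python
# Hypothetical beacon a traffic light, bicycle, or pedestrian tag
# could broadcast so nearby cars can localize it. Illustrative only;
# real systems use standardized V2X message formats.
import json
from dataclasses import dataclass, asdict

@dataclass
class PositionBeacon:
    sender_id: str
    kind: str          # e.g. "traffic_light", "bicycle", "pedestrian"
    lat: float
    lon: float
    timestamp_ms: int  # when the position fix was taken

beacon = PositionBeacon("light-42", "traffic_light",
                        37.7749, -122.4194, 1700000000000)
payload = json.dumps(asdict(beacon))
print(payload)
```

The hard parts this sketch skips are the ones that matter in practice: authentication (so beacons can't be spoofed) and coverage (a car still has to handle everything that isn't broadcasting).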
Show the cost differences and do the math, then come back to us, before suggesting which decisions were ill timed. Otherwise it's just armchair engineering.
I'd love to take on this challenge: the article they linked shows the cost add for LIDAR (+$130)<p>-- but I'm not sure how to get data on e.g. how much Tesla is charged for an Nvidia whatever, or what compute Waymo has --<p>My personal take: Waymo uses cameras too, so maybe we have to assume the worst case, the full cost of lidar, +$130.
Cameras are not the issue; they are dirt cheap. It's the amount of processing power needed to combine their output. You can put 360-degree cameras on your car like BYD does, and have lidar too. You simply use the lidar for the heavy lifting, and a lighter model for basic image recognition: lines on the road, speed signs, etc.<p>The problem with Tesla is that they need to combine the outputs of those cameras into a 3D view, which takes a LOT more processing power to judge distances. That means heavier models, more GPU power, more memory, and so on. And it still has failure modes like a low-hanging sun plus a white truck = let's ram into it because we do not see it.<p>And the more edge cases you try to filter out with a camera-only setup, the more your GPU power needs increase! As a programmer, you can make something darn efficient, but it's those edge cases that really hurt your program's efficiency; 5 to 10x performance drops are not uncommon. Now imagine that with large image-recognition models.<p>Tesla's camera-only approach works great ... under ideal conditions. The issue is those edge cases and non-ideal situations. Lidar deals with a ton of edge cases and removes a lot of the processing needed even in ideal ones.
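The geometry behind "judging distances from cameras" is simple triangulation; the expensive part is the pixel matching that produces the disparity. A minimal sketch with hypothetical camera parameters, showing why small matching errors blow up at range (and why a textureless white trailer, where matching fails entirely, is a degenerate case):

```python
# Sketch of stereo triangulation: depth = focal_length * baseline / disparity.
# Parameters are hypothetical. A real pipeline spends its compute finding
# the disparity (matching pixels between the two images), which is exactly
# what fails on textureless surfaces.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Range to a matched point, from pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("no match found: depth is undefined")
    return focal_px * baseline_m / disparity_px

f, b = 1000.0, 0.3  # hypothetical: 1000 px focal length, 30 cm baseline
print(stereo_depth(f, b, 30.0))  # nearby object: 10.0 m
print(stereo_depth(f, b, 3.0))   # far object: 100.0 m
# At 3 px disparity, a 1 px matching error shifts the estimate by tens of
# meters (2 px gives 150 m, a 50 m error), while lidar measures the same
# range directly by time of flight.
print(stereo_depth(f, b, 2.0) - stereo_depth(f, b, 3.0))  # 50.0
```

The depth error grows quadratically as disparity shrinks, which is why camera-only depth gets hungry for resolution and compute at highway ranges.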
<p><pre><code> >I'd love to take on this challenge: the article they linked shows the cost add for LIDAR (+$130)
</code></pre>
The article <i>claims</i> that, but when you actually try to follow the source it fails the fact check.<p><a href="https://news.ycombinator.com/item?id=46583727">https://news.ycombinator.com/item?id=46583727</a>
Would be nice if you had been able to take it on, but as you say you don't have the data, so it's compared to nothing.
The issue isn't just the off-the-shelf cost of the lidar units. You have to install the sensors on the car. Modifications like that at the scale Waymo does them (they still have fewer than 10K cars) are not automated and probably cost almost as much as the car itself. BYD gets around this by including them in a mass-produced car, so their cost per unit is closer to the $130 off-the-shelf price. This is the winning combination IMO.
Tesla uses their own chips, which you can't skip by using lidar, because you still need to make decisions based on vision. A sparse distance cloud is not enough.