Totally Off Topic - Comma 3 Open Source Driving System Superior to Tesla

I understand this is totally off topic, but I have a feeling a good number of those interested in Mac hardware will find it interesting anyway.


This is the new car I bought this month.

At first glance it is a boring vanilla Ford Explorer - a classic utilitarian family car. But then look at the “Comma 3” installed below the rear-view mirror, which runs the OpenPilot software.

I was a bit skeptical when I read about a tiny company writing open source software for driver assist that runs on the equivalent of a $1250 Android phone. Not only are the reviews stellar, but Consumer Reports also rated it superior to every manufacturer’s driver-assist system, including Tesla’s.

I am just getting started but initially I am impressed and agree with the reviews. My basic thoughts are:

(1) Anyone can probably install the hardware, but there is a risk of damaging the clips holding the trim to parts of your interior. A local custom auto electronics shop installed it for me for $150; I think that’s wise.

(2) While you do not need to be a Python developer to set up the software, it helps if you are comfortable with GitHub and getting technical advice from a wiki or Discourse forum. In my case the public release of the software did not have the “fingerprint” for my car’s particular trim level, so I had to create a custom fork for myself and then submit a pull request for OpenPilot to add my car model to the main branch (a rough sketch of what that involves appears a few paragraphs below). [If you are not into stuff like that - finding a local high school or college student familiar with open source software who can help you should not be too hard.]

(3) OpenPilot’s general approach is pretty conservative and safety-minded. The stock software is more conservative than Ford’s own lane-keeping system in most respects; you can enable more capability step by step as you get comfortable.

(4) The biggest current complaint Tesla owners have is the mandatory “steering wheel nag.” If you don’t keep your hands on the wheel, it alerts you to put your hands there. OpenPilot instead uses a camera to confirm you are looking at the road. The only time it insists you put your hands on the wheel is when there is a sharp turn beyond the software’s torque limits.

(5) Tesla’s system disengages if you turn the wheel too much while on Autopilot. OpenPilot instead encourages “collaborative” steering. If there are parked cars and you want to move the car over a few feet, you can do so with the wheel and then let go and OpenPilot will keep the car in its new position. If you want to direct it at a fork in the road, you can take over steering and then when you let go it will continue on the current road.
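
To make that “collaborative steering” idea concrete, here is a loose caricature in Python. This is emphatically not openpilot’s or Tesla’s actual code; the function names, constants, and thresholds are all placeholders I made up to illustrate the behavioral difference described above.

```python
# Illustrative sketch only; the real behavior lives in each system's lateral
# control and safety code, and these constants are invented for the example.

MAX_ASSIST_TORQUE = 1.0    # normalized actuator limit (placeholder)
OVERRIDE_THRESHOLD = 0.3   # driver torque treated as a deliberate takeover (placeholder)

def clamp(x: float, limit: float) -> float:
    return max(-limit, min(limit, x))

def disengage_style(assist_cmd: float, driver_torque: float) -> float:
    """Drop out entirely once the driver turns the wheel past a threshold."""
    if abs(driver_torque) > OVERRIDE_THRESHOLD:
        return 0.0                          # system disengages; driver must re-enable it
    return clamp(assist_cmd, MAX_ASSIST_TORQUE)

def collaborative_style(assist_cmd: float, driver_torque: float) -> float:
    """Stay engaged, but cap the assist so the driver can always win."""
    headroom = max(0.0, MAX_ASSIST_TORQUE - abs(driver_torque))
    return clamp(assist_cmd, headroom)      # planner then re-centers on the driver's chosen path
```

The practical effect of the second approach is what I described above: you nudge the car a few feet over, the assist yields, and the system keeps lane-keeping around the new position instead of shutting off.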

Overall I am quite pleased, though again it’s quite new to me. The out-of-the-box features are likely sufficient for 99% of drivers, but if you want to customize or build something new, the software is all there for you to tweak or to pay a bounty to other developers to write - with appropriate tools to test and implement it in a safe way.

They are adding support regularly for more cars - about 250 models are supported currently, as shown on their website. Most of the supported cars are fairly utilitarian - but the OpenPilot upgrade is a game-changing addition.
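
Back to point (2) above: for anyone curious what adding a missing “fingerprint” involves, here is a minimal, self-contained sketch. In current openpilot, cars are identified by the firmware versions their ECUs report, kept in per-brand tables; the platform name, addresses, and firmware bytes below are placeholders for illustration, not my car’s real data or the project’s exact file layout.

```python
# Stand-alone illustration of an openpilot-style firmware fingerprint entry.
# The real tables live in the brand's values.py and use openpilot's own Ecu
# enum from cereal; everything here is a placeholder.

from enum import Enum

class Ecu(Enum):        # stand-in for openpilot's Ecu enum
    eps = 0
    fwdRadar = 1
    fwdCamera = 2

FW_VERSIONS = {
    "FORD EXPLORER (HYPOTHETICAL TRIM)": {        # platform key: placeholder
        (Ecu.eps, 0x730, None): [
            b"PLACEHOLDER-EPS-FW\x00\x00",        # firmware string the steering ECU reports
        ],
        (Ecu.fwdRadar, 0x764, None): [
            b"PLACEHOLDER-RADAR-FW\x00\x00",      # firmware string the radar reports
        ],
    },
}

# The device logs which firmware strings your car actually broadcasts, so the
# pull request mostly amounts to appending those strings to the right lists.
```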

https://data.consumerreports.org/wp-content/uploads/2020/11/consumer-reports-active-driving-assistance-systems-november-16-2020.pdf


I can agree from the safety standpoint, but more so I’m wondering who takes responsibility when something goes wrong. I mean…“install at your own risk” is fine for computers. In cars, it’s more “install at other people’s risk.”

All that said, I do very much like the idea of there being a competitive ecosystem for car software. I’m just not sure that people downloading things from GitHub is a good thing when it comes to auto safety. 🙂

My understanding is that NHTSA did try to regulate an early version of the product in 2016; in response the company open-sourced the software and switched to a model of selling only the hardware with no software installed. Apparently that removes the company from NHTSA regulation.

As for the “steering wheel nag” - we could reduce drunk driving deaths a whole lot if we mandated ignition-cutoff breathalyzers in all cars. But as a society we choose instead to emphasize individual responsibility.

If you look through the comma.ai wiki and GitHub discussions, you will see an intense focus on safety and generally conservative decisions in design and release of updates.

There have been no deaths with comma.ai in about 100 million miles of use among about 10,000 users. My guess is that the technical effort it takes to figure out how to install the hardware, and especially the software, weeds out a good number of potential customers who would otherwise try to use it in a stupid-driver-tricks manner; I see YouTube videos of Tesla drivers reading or sleeping while driving, but I do not see that for comma.ai and OpenPilot.

Car owner and driver

At least in the United States, historically speaking, that answer doesn’t usually hold up. A couple of accidents with cars like this and I predict one of a few things will happen:

  • The main developer of the open source software will get sued
  • The auto maker will get sued for building a system that can be “hacked”
  • Legislators will ban the installation of custom software
  • Insurance companies will refuse to insure drivers with custom software

And I would say that which one depends on the prevalence and size of the accidents involved.

Again, I’m for a robust third-party software market. I just don’t think that “you download something from Github and all responsibility for problems falls on you” is something that will fly in our modern society.

Do you feel the same way about customizing a car with non-standard wheel/tire sizes? Replacing an auto engine with one of more power? Customizing automobile suspension or steering design?

Car modification is quite an industry - should we treat them all the same, or should we treat custom software differently?

Should the standard be “a couple of accidents”? Or should the standard be an accident rate with OpenPilot which is meaningfully higher than the accident rate for non-modified cars?

What if the accident statistics wind up showing that cars with OpenPilot have fewer accidents than stock cars? Should insurers still be permitted to surcharge or decline cars with such a “safety” feature?

I agree with you that it is complex - but it cuts both ways.

Keep in mind also that no government body regulates/supervises each software modification that Tesla makes to its Autopilot system. Thus what makes OpenPilot’s software edits intrinsically more dangerous than Tesla’s?

Tesla stated in April 2023 they had 150 million miles driven with FSD Beta - there are many stupid-driver-tricks videos using it, and there have been quite a few reports of fatal accidents while using it. Tesla Surpasses 150 Million Miles Driven with FSD Beta

https://comma.ai reports over 100 million miles driven with their system. No fatal accidents have been reported. Car and Driver, Consumer Reports, Roadshow, The Drive, and others all gave very favorable reviews of comma.ai and OpenPilot.

Sure, anyone can sue anyone - but I doubt such a suit would be successful given these facts.

Why should/would legislators ban software with such a good track record? If they did, should they also ban custom tires/steering/suspension/brakes and all the rest of the auto modification industry?

I was going to chime in with something like this - do the “Self-Driving”/Driver Assist features on cars actually receive any regulatory scrutiny? I was under the impression that they are new enough that there isn’t really a regulatory framework, so it’s a bit of a wild west. If that’s the case, open-source software with a large community may well be better than closed-source proprietary software.

There’s a reason open-source software runs most of the web - not only is it cheaper, it is as good as or better than commercial alternatives. I could easily see the same thing happening for driver assist systems.

I would argue that it is different. Fundamentally. If I wanted to 3D print my own engine in a garage, that would be one heck of an undertaking - but it would mostly be affecting whether or not the vehicle moves down the road. And if one of those aftermarket products fails catastrophically, the manufacturer of said aftermarket product is at least theoretically on the hook.

Software that actually makes decisions about how to drive the vehicle, with a “user assumes all responsibility for injury or death” type disclaimer, would be something genuinely new, not more of the status quo.

Nothing makes it inherently more dangerous. But Tesla doesn’t post their software to a website and say “use this at your own risk, and also feel free to tinker around with it however you want.” They’re posting a finished product, and are at least theoretically liable if their software fails and causes a problem. If the government wants to establish a standard, there’s a fixed company for them to hold liable.

“Use at your own risk” effectively makes the customer the software company. And I would guess that for the average consumer, the cost of defending a lawsuit like that would be positively insane. Hence my fourth point above. Toss a turbocharger on your engine, and your insurance rates skyrocket. I would expect something similar if open source auto software becomes a common thing.

I don’t think Tesla agrees there. Tesla’s argument is that no matter what their software does or does not do, the driver is responsible and should react accordingly.

Indeed, I think Tesla would be correct if they called their system “Driver Assist”. But I think calling it “Full Self-Driving” is where Tesla’s liability may begin.

With regard to comma.ai and OpenPilot, even though it is open source, it is not actually a Wild West of software. They have done an impressive job creating safety rules which cannot be breached by software edits.

This is a brief excerpt of their safety protocol:

  • Upon stepping on either pedal or pressing the cancel button, the panda safety code shall not allow any non-zero gas, brake or steer command until the user presses the engage button again.
  • openpilot shall immediately cancel when the driver’s seat-belt is unlatched or the driver’s door is open. This is to prevent the driver leaving the vehicle while openpilot is still engaged.
  • Max accel and decel injections: the maximum acceleration and deceleration of the vehicle shall be limited to 2 m/s² and -3.5 m/s², respectively. These limits are more conservative than what’s recommended in ISO 22179.
  • Max steer injection: in the case of a completely erroneous steering command, the driver shall have at least 1 second of reaction time before the car significantly deviates from its original path.
  • Steer loss: while in a turn, in the case of an immediate loss of steering torque, the driver shall have at least 1 second of reaction time before the car significantly deviates from the desired path (e.g. lane lines crossing).
  • Steer injection against driver’s desire: the driver shall be capable of overriding openpilot’s steering strength with minimal effort: less than 3 Nm of extra torque applied at the steering wheel shall be necessary to correct openpilot’s steering control. The specific implementation of those requirements depends on the specific car control APIs, as explained here.
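
To make the flavor of those rules concrete, here is a rough Python paraphrase of the pedal/cancel, seat-belt/door, and acceleration-limit bullets. The real enforcement lives in panda’s C safety code, which runs independently of openpilot itself; only the two acceleration numbers come from the excerpt above, and the rest of the structure is my own illustration.

```python
# Illustrative paraphrase of the quoted safety rules, not the actual panda code.

MAX_ACCEL = 2.0    # m/s^2, from the excerpt above
MIN_ACCEL = -3.5   # m/s^2, from the excerpt above

class SafetyGate:
    def __init__(self) -> None:
        self.controls_allowed = False

    def update(self, gas_pressed: bool, brake_pressed: bool, cancel_pressed: bool,
               seatbelt_latched: bool, door_open: bool, engage_pressed: bool) -> None:
        # Any pedal press or cancel latches the gate closed until the driver re-engages.
        if gas_pressed or brake_pressed or cancel_pressed:
            self.controls_allowed = False
        # An unlatched seat belt or open door cancels immediately.
        if not seatbelt_latched or door_open:
            self.controls_allowed = False
        if engage_pressed:
            self.controls_allowed = True

    def clamp_accel(self, accel_cmd: float) -> float:
        # No non-zero longitudinal command is allowed while the gate is closed.
        if not self.controls_allowed:
            return 0.0
        return max(MIN_ACCEL, min(MAX_ACCEL, accel_cmd))
```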

More details here for those interested:

Neat; thanks for posting. How exactly is it an improvement over your car’s native system? Is the native system just adaptive cruise and lane centering? I’ve lightly followed Comma but never seen it in action.

Reminds me of the diabetes world. There is open source software that integrates monitors and pumps to make fully automated ‘artificial pancreas’ systems. The hardware is off-the-shelf (an Android phone and a couple of accessories). This means neither the hardware nor the software can be regulated by the FDA. The pump and CGM makers aren’t required to shut out third parties either. It’s life-extending for diabetics to access these tools, but not everyone is happy about it.

In Comma’s case, technically falling under the category of dashcam is what lets them continue selling commercial hardware. NHTSA isn’t going to restrict dashcams based on the camera angles and sensor quality that Comma uses, or prevent cables from being connected. The same goes for restricting CAN bus access, which would be hard on disabled drivers and fleets.


Interesting observation - Comma only supports cars which have stock adaptive cruise control; they install no additional sensors. One of their design disagreements with Tesla is that Comma considers Tesla’s emphasis on Lidar and other sensors unnecessary. If humans have been driving for 100 years based on visual information, why can’t Comma do that too [assisted by the manufacturer’s radar for adaptive cruise to detect the speed of the lead car]?

Yes - The car’s native system includes adaptive cruise and lane centering. Depending on which software mode/features I choose, improvements include:

(1) No steering wheel nag when hands-off - Comma has a camera which verifies the driver is alert and watching the road - so there is no annoyance to keep hands on the wheel

(2) Collaborative Steering - Tesla and many other manufacturer systems only permit limited driver steering input while driver assist is enabled; if you steer too much the system is disabled. Comma instead encourages driver input and uses that to adjust the driving in real-time.

So if Comma is enabled and there are parked cars or construction vehicles uncomfortably close when the car is centered in the lane, I can steer a couple of feet off-center; Comma takes that as the new preference at the moment, and it does NOT disable the system.

This is the single biggest improvement in my view because there is no constant cognitive friction remembering if the system is on or off and thus how hard to hold or not hold the wheel. The system remains on almost always - and no matter whether it is on or off, I am welcome to tweak the steering if it is not to my liking. It makes the driver-assist second nature with no discouragement/friction to intervene.

(3) Warning at tight turns - since it does not give “nag” warnings to put hands on the wheel, I take it more seriously when it does ask me to put hands on the wheel. That happens when it is going around a sharp turn and the software limits on steering torque are too low to make the turn as sharply as the system knows is needed.

(4) Optional AI speed adjustment - In regular mode, without a car in front, the system drives at the max speed set in the adaptive cruise system. In experimental mode, the system may choose a lower speed if it perceives a human would typically drive slower. Typically that means it slows down on tight turns so it is able to make the turns without driver intervention (see the sketch after this list).

(5) Optional Stop Sign and Red Light Detection - this is a work in progress and seems a bit dependent on weather/daylight conditions.

(6) Dashcam - Drives can be automatically recorded and uploaded to the cloud.

(7) Optional point-to-point navigation (“self-drive”) - this is the most experimental of the optional features. Though to many it’s the holy grail of “self-driving” software, to me the Collaborative Steering (#2) significantly reduces the need for auto-navigation. I don’t expect or need my car to automatically exit the highway, make unprotected left turns, enter a traffic circle, etc. I’m happy if my role is steering at the intersections while the system remains engaged, and then it resumes lane-centering once I have established the car in the new road/lane.
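
On point (4), the slow-down-for-curves behavior follows from basic physics rather than anything exotic: a planner that caps lateral acceleration has to cap speed in proportion to curvature. A back-of-the-envelope version (not openpilot’s actual code; the 2 m/s² comfort limit is just an assumed figure):

```python
import math

def curve_speed_limit(curvature: float, max_lat_accel: float = 2.0) -> float:
    """Highest speed (m/s) keeping lateral acceleration v^2 * curvature <= max_lat_accel."""
    if curvature <= 0.0:
        return math.inf          # straight road: no curve-based limit
    return math.sqrt(max_lat_accel / curvature)

# Example: a 50 m radius curve has curvature 1/50 = 0.02 per meter, so
# curve_speed_limit(0.02) -> 10 m/s, roughly 22 mph.
```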


I wouldn’t be surprised if the standard becomes “toss an aftermarket self-driving system on your car (commercial or not) and your insurance rates skyrocket.” I’m not yet convinced it’s the correct move, but at least it covers liability.

These two quotes taken together are kind of interesting. IIRC one of the main arguments for self-driving is that computers should be able to drive better than humans do, without the emotion, lapses in judgement, or errors that humans make. So on one hand, maybe Tesla is right and there should be LiDAR, at least for full self-driving, because LiDAR will provide data that humans may not otherwise be able to reliably gather, and that data can be used to make smarter driving decisions.

The other side is that if it’s a driver assistant (not FSD), maybe the floor of how good it has to be is simply that the combination of the driver + assistant should not be worse than the driver alone. In that case the bar is not high and there is plenty of room for experimentation.

I hear you. I’m not saying you should be okay with unregulated software. Just observing the tradeoff in a sphere with which I’m more familiar. Even though the FDA deals with reactions that occur within individual bodies, because some diseases and treatments are so pervasive, both agencies deal with the health of large populations.

Actually it could impact your life expectancy notably if an insulin pump malfunctions and a driver passes out from hypoglycemia.

But the likelihood of that occurring - even with a home-programmed pump - is so small that it is not something regulators focus on. A more meaningful predictor of risk would likely be obtained from a person’s overall medical record than by analyzing the computer code in someone’s insulin pump.

Is the risk the driving assistant system or the driver who uses the system irresponsibly by reading or sleeping or whatever else while using such a system?

While there have been numerous Tesla “Autopilot” or “Full Self-Driving” crashes, I do not believe a single one has occurred because the system took over control of the car and the driver was unable to regain control. Rather, they have all occurred because drivers disregarded clear warnings that they need to monitor the car at all times.

I do not think anyone is seriously concerned that comma.ai will commandeer control of a car and thus cause an accident by preventing a conscientious driver from steering or stopping the vehicle.

It’s similar to carrying a cell phone in a car. There are many ways a cell phone can make driving not only more convenient but can also literally save a life - from navigating to communicating in an emergency. If someone gets into an accident due to texting while driving, is it the fault of the cell phone hardware/software, or is it the fault of the driver, who surely was warned of the risks of texting while driving?

I think insurers are smart enough to know that the true risk is the driver, not the driver-assistant system.

Insurers would no doubt get notably involved if we were talking about driving systems which encouraged drivers to sleep or otherwise delegate authority to the computer. That’s not the case today.

If a Tesla driver or comma.ai driver reads a book while driving then the cause of the subsequent crash is the driver, not the driving software.

Exactly correct.

Tesla marketing has unfortunately suggested they offer true “self-driving” software, but the driver manual makes it very clear it is only a driver assistant system.

Comma.ai and OpenPilot are extremely clear in multiple ways that it is only a driver-assistant system. They also make impressively serious efforts to address the risk that someone could either intentionally or inadvertently write code that causes a driver to lose control of the car to the system.

Might I suggest you look at the documentation on how to port OpenPilot to a new car, and a few of the discussions on GitHub and Discourse where devs working on OpenPilot software projects discuss and critique each other’s methods and software tools? I rarely see that level of sophistication even among builders of experimental aircraft - who are regulated by the FAA.

Indeed I saw less sophisticated engineering analysis and quality assurance protocols when I took a certified aircraft that I owned to a licensed avionics shop for installation of FAA-certified electronics and related software.

This is a well-executed software project. And I say again - putting all that aside, there are design limits created by both the car manufacturer and comma.ai which preclude “hobbyists” from crossing engineering limits with regard to critical auto functions such as steering and braking.

In the end, there is no doubt that a Comma 3X running a custom OpenPilot fork might fail to operate at all or might do something quite different from what I intended. But I will always have a reasonable timeframe in which to respond, i.e. it might attempt a wrong turn, but there is no chance it will suddenly make a U-turn or stop short on the interstate at highway speed, etc.

In short - take a deeper look. The failure modes are all benign unless the driver is literally asleep.

The “fact” is that their approach is legal, transparent, has an excellent safety record, and has been endorsed by Consumer Reports and many other industry publications which have reviewed them.

Even the open-source software design has significant safety benefits; not only the algorithms but also the discussion and testing of possible new features, including results good and bad, and tools to identify any safety issues in your own car, are out there for the public to see.

I don’t understand what’s scary about it being open source. Closed source and not subject to any safety testing/approval would be even scarier.

I agree that both the hardware and software should be subject to standardized safety testing and approval, but when the code for something like this is closed it’s like a bunch of secret ingredients no one outside the company can scrutinize.

Even if such a device passes functional testing and approval, it could still contain unobserved weaknesses in the software code that could later cause it to fail or be vulnerable to hacking by bad actors.


I actually agree. Open vs. closed source isn’t the issue.

The main issue for me is that this is completely unsupported software. From the Comma website, “Although there’s no official support for any software, it is comma’s policy to continue providing software updates to hardware for 1 year after it is the latest generation sold.” It’s not only not safety tested, it’s unsupported. They take no responsibility for it whatsoever.


Good points - I would agree with you if the standard in the industry were for NHTSA or whatever other regulator to review and approve the high-level software code written and/or revised by manufacturers. But they do not do so. Instead the manufacturers establish low-level or firmware-level safety precautions using industry-accepted concepts, such as disabling software acceleration/braking when the driver presses the brake, and establishing torque limits on how much of a turn the software can initiate.

As long as those low-level guardrails remain in place, no commands sent to the car’s communication bus will create a safety hazard as long as the driver is paying attention.
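
As a concrete (and again purely illustrative) example of the kind of guardrail I mean, a firmware-level limiter typically bounds both the absolute steering torque and how fast it can change, so that even a completely wrong high-level command cannot move the wheel faster than an attentive driver can counter. The numbers below are placeholders, not the values any particular car or the panda actually enforces:

```python
# Illustrative torque limiter; constants are placeholders.

MAX_STEER_TORQUE = 300   # absolute cap, in the ECU's own units
MAX_TORQUE_STEP = 10     # max change allowed per control frame

def limit_steer_cmd(requested: int, last_sent: int) -> int:
    # Clamp the absolute torque software may ever request...
    clamped = max(-MAX_STEER_TORQUE, min(MAX_STEER_TORQUE, requested))
    # ...and limit how quickly it can ramp toward that value.
    step = max(-MAX_TORQUE_STEP, min(MAX_TORQUE_STEP, clamped - last_sent))
    return last_sent + step
```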

That is why Tesla is permitted to sell cars with “Beta” software. Whether the beta software “works” or not is between Tesla and its customers; NHTSA knows there are safeguards so that beta software will not hurt anyone as long as the driver is paying attention.

The same is true for comma.ai and OpenPilot; if you look at the policies in their wiki/website and in the pull requests on their GitHub, they enforce the low-level safety policies strictly. They even enforce them on publicly offered forks of their software, and they will quickly ban users or developers who do not follow those rules.

So I see the “safety” of Tesla Autopilot/FSD vs OpenPilot as essentially equivalent.

As for the comma.ai support policy - I view that as an economic risk I am taking, not a safety risk. Is it possible that Comma.ai goes out of business and I am left with hardware and software I can no longer maintain? Sure. If so - I am only out the $1250 I paid for it. I can quickly remove their hardware and software and the car will be back as it originally was.
