Quote:
Originally Posted by churchx
Indeed, i just empirically guessed how it works...
I'm far from an expert, but my understanding is that O2 sensors read the partial pressure of oxygen in the exhaust stream relative to a reference air sample.
That partial pressure is translated into a metric of combustion completeness: lambda, the normalized air-fuel ratio (AFR).
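To put numbers on that: lambda is just the measured AFR divided by the stoichiometric AFR for the fuel. A minimal Python sketch (14.7 is the usual gasoline figure; ethanol blends shift it lower):

```python
AFR_STOICH_GASOLINE = 14.7  # typical stoichiometric AFR for gasoline

def lambda_from_afr(afr, afr_stoich=AFR_STOICH_GASOLINE):
    """lambda = 1.0 at stoichiometry; < 1 is rich, > 1 is lean."""
    return afr / afr_stoich

print(lambda_from_afr(14.7))  # 1.00 -> stoichiometric
print(lambda_from_afr(13.2))  # ~0.90 -> rich
print(lambda_from_afr(16.2))  # ~1.10 -> lean
```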
The ECU begins with a target ideal AFR and "learns" the best AFR for your fuel via a recursive (adaptive) algorithm. Although you want to be near complete combustion, most ECUs deliberately cycle between lean and rich operation to keep the catalyst operating efficiently. The ECU spits out a target AFR at this point.
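A toy version of that lean/rich cycling is a bang-bang loop on the switching sensor, with a slow learned trim layered on top. The gains and the 0.45 V switch point below are invented for illustration, not any real ECU's calibration:

```python
# Toy closed-loop fueling: a switching O2 sensor only says "rich" or
# "lean", so the ECU ramps the fuel trim the other way, producing the
# characteristic lean/rich oscillation that keeps the catalyst working.

SWITCH_POINT_V = 0.45   # hypothetical narrowband switch voltage
RAMP_STEP = 0.002       # short-term trim step per loop iteration
LEARN_RATE = 0.0001     # slow "learned" (long-term) trim adaptation

short_trim = 0.0
long_trim = 0.0

def update_trims(o2_voltage):
    """Return the total fuel correction factor for this loop iteration."""
    global short_trim, long_trim
    if o2_voltage > SWITCH_POINT_V:   # reading rich -> remove fuel
        short_trim -= RAMP_STEP
    else:                             # reading lean -> add fuel
        short_trim += RAMP_STEP
    # Slowly fold any persistent short-term bias into the learned trim.
    long_trim += LEARN_RATE * short_trim
    return 1.0 + short_trim + long_trim
```

The constant back-and-forth ramping is why a healthy narrowband sensor trace looks like an oscillation rather than a flat line.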
By combining the learned AFR of your fuel with the measured lambda, you get an estimate of the actual AFR. The ECU compares the actual and target AFRs, which is useful feedback for the next combustion event. Correction is usually applied to the injector pulsewidth (since it's fast), but you can also adjust the throttle or VVT-i angles to manipulate the air flow (LOAD).
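Applied to fueling, the fast correction path might look something like this sketch (the base pulsewidth and lambda numbers are placeholders):

```python
# Sketch: scale injector pulsewidth by the ratio of measured to target
# lambda. Measuring leaner than target (lambda high) means more fuel.

def corrected_pulsewidth_ms(base_pw_ms, lambda_measured, lambda_target):
    """Return the corrected injector on-time in milliseconds."""
    return base_pw_ms * (lambda_measured / lambda_target)

# e.g. base 3.0 ms, measured lambda 1.05 vs target 1.00 -> 3.15 ms
print(corrected_pulsewidth_ms(3.0, 1.05, 1.00))
```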
Regarding catalysts, the catalyst's flow, health, and efficiency are all modelled. Flow is simple enough: it's a fixed volume with some flow restriction, behaving like a first-order system with a small time delay in the sensors. For health and efficiency, the ECU estimates temperature and oxygen storage states. If the catalyst is actively storing and releasing large amounts of oxygen, it can be 90%+ efficient.
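The oxygen storage part can be sketched as a saturating integrator: lean exhaust deposits O2 in the brick, rich exhaust withdraws it. The capacity and rate gain below are made-up numbers, not from any real catalyst:

```python
# Sketch: catalyst oxygen storage modelled as a clamped integrator.
# Lean exhaust (lambda > 1) stores oxygen; rich exhaust releases it.

OSC_CAPACITY_G = 1.0   # hypothetical oxygen storage capacity, grams
K_STORE = 0.05         # hypothetical store/release rate gain, g/s

def step_oxygen_storage(stored_g, lambda_in, dt_s):
    """Advance the stored-oxygen state by one timestep of dt_s seconds."""
    delta = K_STORE * (lambda_in - 1.0) * dt_s
    return min(max(stored_g + delta, 0.0), OSC_CAPACITY_G)
```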
With an efficiency estimate and a time estimate... you can follow a volume of exhaust from the UEGO (upstream linear O2 sensor) to the HEGO (downstream switching-type O2 sensor). The exact metric for catalyst health isn't something I know much about... but going through the readiness cycle (lots of different driving situations) helps ensure that the models used to generate the health metric are accurate.
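Following a slug of exhaust from the UEGO to the HEGO is basically a transport-delay calculation: delay = catalyst volume / volumetric flow, after which you can compare the time-aligned upstream and downstream signals. Again, the volume and sample rate below are placeholders:

```python
# Sketch: estimate the UEGO-to-HEGO transport delay as catalyst volume
# over volumetric exhaust flow, then fetch the upstream sample from
# that long ago so it lines up with what the HEGO is seeing now.

from collections import deque

CAT_VOLUME_L = 1.5    # hypothetical catalyst brick volume, litres
SAMPLE_DT_S = 0.01    # hypothetical sensor sampling period, seconds

uego_history = deque(maxlen=1000)  # ~10 s of upstream samples

def transport_delay_s(exhaust_flow_lps):
    """Seconds for a slug of exhaust to traverse the catalyst."""
    return CAT_VOLUME_L / max(exhaust_flow_lps, 1e-6)

def delayed_uego(uego_now, exhaust_flow_lps):
    """Buffer the current UEGO sample, return the time-aligned one."""
    uego_history.append(uego_now)
    n_back = int(transport_delay_s(exhaust_flow_lps) / SAMPLE_DT_S)
    n_back = min(n_back, len(uego_history) - 1)
    return uego_history[-1 - n_back]
```

If the time-aligned downstream signal still swings as hard as the upstream one, the brick isn't storing much oxygen, which is roughly the idea behind the efficiency side of the health check.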