Lion Air 737-Max missing, presumed down in the sea near CGK (Jakarta)


  • 3WE
    replied
    Originally posted by ATLcrew View Post
    Never mind that, I have it on good authority that Sully actually likes decaf, which is an unforgivable sin for an aviator of his stature.
    Serious evidence of the decline of traditional fundamentals. It USED to be about smoker vs. non-smoker pilots. THEN women vs. men. Now caffeinated vs. decaffeinated... Clearly the non-smoker, less-masculine, decaf types need their FBW and auto trim systems!



  • ATLcrew
    replied
    Originally posted by Black Ram View Post
    But if you ask me, his mistake is mostly his comments, which disappointed me a lot.
    Never mind that, I have it on good authority that Sully actually likes decaf, which is an unforgivable sin for an aviator of his stature.



  • Evan
    replied
    Originally posted by Black Ram View Post
    "That was a little-known part of the software that no airline operators or pilots knew about."
    Oh jesus, it's worse out there than I thought.



  • Evan
    replied
    Originally posted by Black Ram View Post
    It was mentioned somewhere that a system that acts abnormally after a single-point failure should not have been certified. I'm no expert here and I can't say if this is true or not, but this MCAS set-up definitely lacks redundancy. And you don't need four AoA vanes, you need at least three.
    During a single-point failure:

    You need two AoA vanes to be fail-safe (system detects disagreement and deactivates).
    You need three AoA vanes to be fail-operational (system detects disagreement, votes and continues to operate if two sensors are in agreement).
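    The two schemes above can be sketched in a few lines. This is a toy illustration only, not any certified avionics logic; the function name and the 2-degree tolerance are invented:

```python
# Toy sketch of fail-safe vs. fail-operational AoA voting.
# Illustrative only: names, tolerance and averaging are invented.

TOLERANCE_DEG = 2.0  # max allowed vane disagreement (made-up value)

def vote_aoa(readings, tolerance=TOLERANCE_DEG):
    """Return (status, value) for a list of 2 or 3 AoA readings in degrees."""
    if len(readings) == 2:
        a, b = readings
        if abs(a - b) <= tolerance:
            return ("OK", (a + b) / 2)
        # Fail-safe: disagreement detected, system deactivates itself.
        return ("DEACTIVATED", None)
    if len(readings) == 3:
        a, b, c = sorted(readings)
        # Fail-operational: vote, then keep running on the two that agree.
        if abs(a - b) <= tolerance:
            return ("OK", (a + b) / 2)
        if abs(b - c) <= tolerance:
            return ("OK", (b + c) / 2)
        return ("DEACTIVATED", None)
    raise ValueError("sketch handles 2 or 3 sensors only")
```

    With two vanes the best the system can do is detect the fault and stand down; the third vane is what lets it outvote the bad sensor and keep operating.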



  • Black Ram
    replied
    Originally posted by Schwartz View Post
    Automated systems will always act on a false indication, that is guaranteed. If there were four sensors, then the automated system could be designed to discard a single non-agreeing value. If there are only two, the system will make a choice and inform the pilots of the discrepancy, which is exactly what it did from what I can tell. We know this approach works, because the crew of the previous flight had exactly the same problem and performed exactly as they were trained to do, and we don't see these planes falling out of the sky, because I guarantee you this won't be the first time one of these sensors has malfunctioned.

    It was mentioned somewhere that a system that acts abnormally after a single-point failure should not have been certified. I'm no expert here and I can't say if this is true or not, but this MCAS set-up definitely lacks redundancy. And you don't need four AoA vanes, you need at least three.

    I'll also admit I haven't read the official report, but I have read a lot of comments, and also this excellent summary (though I feel the yoke breakout theory is speculative for now):
    https://leehamnews.com/2018/11/28/in...r-crash-report

    Here is what I can tell, which may be wrong of course:

    - No, there was no system notification of the AoA discrepancy. The flags presented to the crews on the different flights with this airframe were: IAS DISAGREE, ALT DISAGREE, SPEED TRIM FAIL, MACH TRIM FAIL. There is no mention of anything about AoA, except for an automatic FCC log entry, which was for maintenance.

    - I don't think the previous crew were trained to deal with this problem; I think they just got very lucky. On that flight, the Captain deemed his IAS unreliable, so he gave control to the F/O. That left the Captain free to make observations, and he saw the automatic ND trimming, after which, seemingly by chance, he figured out that setting the trim to CUTOUT alleviated the problem. Note that he then reset the master trim switch back to normal, the MCAS kicked in again, and the Captain set it back to CUTOUT. Why would he do that if he was trained for the procedure, or if he even had a clue what exactly was going on? It certainly does not look like following any training, but more like improvising.

    - The previous crew never mentioned anything about ND trimming or MCAS... probably because they were unaware of it, and because they did not understand what had happened to them. They thought it had something to do with the STS, hence they logged "STS going the wrong way".

    - As some pilots have said before this report, the MCAS ND trim is not exactly a classic case of runaway trim - first, it's incremental; second, it's not uncommanded per se; and finally and most importantly - the trim switches on the yoke did counteract the ND trim.

    - I do agree that human factors are involved, it's more than obvious. I'm just not sure whether the pilots really messed up bad, or they were just presented with a really difficult situation on a new airplane, without the information they needed.
    Remember - something as simple as keeping the flaps at 5 would have prevented the crash, but they were unaware. On the accident flight, it was the Captain who happened to be flying the airplane, and he was dealing with the stick-shaker, yoke forces, etc - sounds stressful. I'm still trying to figure out if he was also flying pitch and power, in which case the workload would have been pretty heavy.

    - For now, there is no reason to suggest maintenance did not follow the OEM procedures.

    - These planes do not fall out of the sky, but the MAX is a new plane, and MCAS is a new system. Certainly, the lives lost on JT610 will be the price we pay to make sure this plane does not fall out of the sky like that ever again. But it is very sad, and a completely unnecessary loss of life, which could have been easily prevented.
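    Putting the reported behavior together, here is a purely illustrative toy model (the class, increments and conditions are my own invention, not Boeing's logic): the automatic ND trim is incremental, flaps extended inhibit it, pilot electric trim counteracts it, and CUTOUT disables all electric trim.

```python
# Toy model of the trim behavior described above.
# Illustrative only: all names, values and conditions are invented.

class ToyTrimSystem:
    def __init__(self):
        self.trim_units = 5.0   # arbitrary starting stabilizer trim
        self.cutout = False     # stab trim CUTOUT switches

    def step(self, aoa_high, flaps_up, pilot_trim_input=0.0):
        """One update cycle of the toy system."""
        if self.cutout:
            return  # CUTOUT disables all electric trim (manual wheel only)
        if pilot_trim_input:
            # Yoke trim switches counteract the automatic nose-down trim.
            self.trim_units += pilot_trim_input
            return
        if aoa_high and flaps_up:
            self.trim_units -= 0.5  # incremental nose-down trim command
```

    Note how, in this toy model, keeping the flaps extended or setting CUTOUT both stop the nose-down trimming, matching the two escape routes described above.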




    Originally posted by Schwartz View Post
    At the risk of going way off topic... what mistake was that exactly?
    Yes, it is off-topic, but you can read what Sully says here: https://edition.cnn.com/travel/artic...nes/index.html

    Computer-assisted flight systems were active, Sullenberger said, but there was no need for them.
    "We never got to the extremes where [flight control computers] would have protected us" from pointing the plane's nose too high, or going too fast or too slow, he told CNN last week. "We didn't need any of it."
    In fact, flight control computers actually hindered the landing, said Sullenberger, who's now a CBS News aviation and safety consultant. Flight software prevented him from keeping the plane's nose a little higher during the last four seconds before he ditched US Airways Flight 1549 in the icy Hudson River.
    "So we hit harder than we would have, had we been able to keep the nose up," he said. "That was a little-known part of the software that no airline operators or pilots knew about."
    Meanwhile, the NTSB report clearly states they were in Alpha Protection mode. So the short answer is, Sully made the mistake of pulling the nose up too much, which was inhibited by the FBW control protections. He also made the mistake of thinking that raising the nose more would have made the landing better, while in fact the automation gave him the best performance attainable for the conditions, which is why the nose wasn't going any higher. So what does that tell me? It tells me that under certain conditions, humans are humans, and it doesn't matter that much whether you are Captain Sully or Captain Asseline. And no, I'm in no way comparing these two. Sully still did his job really well and is very much responsible for the miraculous outcome on the Hudson. But the plane also did its part, after Sully tried to make a mistake that could have turned catastrophic in the very last seconds. But if you ask me, his mistake is mostly his comments, which disappointed me a lot.



  • Evan
    replied
    Originally posted by Schwartz View Post
    This may be semantics, but deciding to ignore something is taking an action. Deciding to report a mismatch is an action. Automated systems will always act on the inputs they have access to, and that is the limit of the scope of what they can operate on, unlike a human in the cockpit. In these conversations, I get the feeling people tend to treat these automated systems like people, or like complex multi-layer AIs, which is not the case.

    You can build a 10x more complicated decision algorithm based on two inputs, or more, and you will still have a scenario (harder to figure out if more complex) where there is a malfunction and conflict that will lead to the wrong action being taken. This is unavoidable in automated systems with limited inputs and static algorithms (which are mostly required to be predictable). As Gabriel has stated several times, there are other things that can cause a plane to have runaway trim, and this should be a standard training scenario for every pilot. I really think the focus is on the wrong thing here -- the design of the MCAS -- when it should be on why the pilots didn't know what to do with a very flyable plane with a very predictable problem: an incorrect sensor reading.
    I think the focus has to be on both things. We have a system that is INSANE** and a crew that didn't take appropriate actions to shut it down.

    **If, in fact, it wasn't designed to require both sensors in agreement (within margin of error) to operate. I say this because one of my suspicions is that this might be another situation like the original autothrottle logic on the NG's. That logic, which partially caused the crash of Turkish Flt# 1951, involved a comparator circuit that was DESIGNED to disengage the autothrottle if both radar altimeters were not in agreement. But it didn't reliably do this. There were at least 12 instances of the autothrottle remaining in operation while using reference from a single, faulty radalt prior to the Turkish Airlines crash. The autothrottle problem was known to Boeing long before that. They replaced the unit on new-build aircraft in 2003 and issued a recommendation to operators of existing aircraft to retrofit. There was no AD requirement for airworthiness. Even after the fatal crash, I don't think there ever was one.

    Meanwhile, Airbus was required to supply a third array of air and inertial data, of which two arrays must always be in agreement, for their FBW systems to be certified. Both the Airbus A320neo and its direct competitor, the Boeing 737-MAX, are using air data to introduce uncommanded flight surface movements, yet only Airbus is required to have this added layer of validity.

    As to semantics, I interpret an action here as anything that moves a flight control surface and affects or upsets flight stability. A properly designed automated system doesn't do that using a single air data source.



  • Schwartz
    replied
    Originally posted by Evan View Post
    What?! Properly designed (and properly certified) systems that make automated flight control decisions do not act on a single, false indication. There are some very good questions posed by Simon Hradecky on avherald.com that Boeing should answer, including:
    This may be semantics, but deciding to ignore something is taking an action. Deciding to report a mismatch is an action. Automated systems will always act on the inputs they have access to, and that is the limit of the scope of what they can operate on, unlike a human in the cockpit. In these conversations, I get the feeling people tend to treat these automated systems like people, or like complex multi-layer AIs, which is not the case.

    You can build a 10x more complicated decision algorithm based on two inputs, or more, and you will still have a scenario (harder to figure out if more complex) where there is a malfunction and conflict that will lead to the wrong action being taken. This is unavoidable in automated systems with limited inputs and static algorithms (which are mostly required to be predictable). As Gabriel has stated several times, there are other things that can cause a plane to have runaway trim, and this should be a standard training scenario for every pilot. I really think the focus is on the wrong thing here -- the design of the MCAS -- when it should be on why the pilots didn't know what to do with a very flyable plane with a very predictable problem: an incorrect sensor reading.

    He let the airspeed fall below green-dot (he insisted he stayed on it), and when he tried to flare before splashdown the stall protection limited his command, causing the airplane to hit at a higher than desired vertical speed, which resulted in damage to the lower fuselage and water ingress. If the stall protection hadn't prevented his command, he might have floated longer and greased it down intact, or he might have stalled, dropped a wing and cartwheeled it. It's a pointless argument. He saved the day, possibly with a little help from Hal.
    Ah, thank you.



  • Evan
    replied
    Originally posted by Schwartz View Post
    Automated systems will always act on a false indication, that is guaranteed.
    What?! Properly designed (and properly certified) systems that make automated flight control decisions do not act on a single, false indication. There are some very good questions posed by Simon Hradecky on avherald.com that Boeing should answer, including:

    Originally posted by avherald.com
    - Why was the MCAS permitted to operate on the basis of a single AoA value showing too high an angle of attack? Why does the MCAS not consider the other AoA value?

    - What should the system response have been in case the AoA values disagree? How would the system determine which value is plausible and which is erroneous? Is there any such check at all? Would MCAS not need to be prohibited if the left and right AoA disagree?

    - What is the reasoning behind the certification permitting a system to modify the aircraft's equilibrium (via trim) in manual flight in a way that the trim could run to the mechanical stop and thus overpower the elevator?

    - Was the AoA input to the MCAS (or in general) ever cross-checked, e.g. by taking altitude, IAS and vertical speed into account to compute TAS (via altitude, density and IAS) and the angle of the airflow (by computing the angle of the flight trajectory from TAS and vertical speed)? Could such a cross-checking algorithm not even detect if two or more AoA sensors were frozen/faulty?
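    The last question can be sketched with some rough arithmetic. This is my own toy calculation, not anything from the report or from Boeing: it ignores wind, sideslip and body-axis subtleties, and the 3-degree tolerance is invented. In steady flight the flight-path angle follows from TAS and vertical speed, and AoA should roughly equal pitch minus flight-path angle:

```python
import math

def aoa_is_plausible(pitch_deg, tas_mps, vs_mps, aoa_deg, tol_deg=3.0):
    """Rough cross-check: in steady flight, AoA ~ pitch - flight-path angle.

    Illustrative only; ignores wind, sideslip and body-axis subtleties.
    """
    fpa_deg = math.degrees(math.asin(vs_mps / tas_mps))  # flight-path angle
    implied_aoa = pitch_deg - fpa_deg
    return abs(implied_aoa - aoa_deg) <= tol_deg
```

    A vane reading 20 degrees in level flight at a pitch of 5 degrees would fail such a check, even if a second vane were frozen at the same bogus value.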
    At the risk of going way off topic... what mistake was that exactly?
    He let the airspeed fall below green-dot (he insisted he stayed on it), and when he tried to flare before splashdown the stall protection limited his command, causing the airplane to hit at a higher than desired vertical speed, which resulted in damage to the lower fuselage and water ingress. If the stall protection hadn't prevented his command, he might have floated longer and greased it down intact, or he might have stalled, dropped a wing and cartwheeled it. It's a pointless argument. He saved the day, possibly with a little help from Hal.



  • TeeVee
    replied
    Originally posted by Evan View Post
    Now this is interesting. Are you saying professional pilots like yourself are unaware of certain 'problem' airlines in 'problem' nations that one might want to avoid flying for?
    aside from being his coy self, maybe he's playing the blue wall in the sky and doesn't want to call out anyone in the industry.



  • Schwartz
    replied
    Originally posted by Black Ram View Post
    Like I said before, even if this plane was sorted out before its fateful flight, or if that flight had never taken place, the fact remains that the plane's automation acted on the false indications of only one AoA vane, and that pilots flying this plane did not know about the plane's automation. This crash could have happened at any other given time. What's to say this exact failure can't occur spontaneously when the plane is in the air?
    Automated systems will always act on a false indication, that is guaranteed. If there were four sensors, then the automated system could be designed to discard a single non-agreeing value. If there are only two, the system will make a choice and inform the pilots of the discrepancy, which is exactly what it did from what I can tell. We know this approach works, because the crew of the previous flight had exactly the same problem and performed exactly as they were trained to do, and we don't see these planes falling out of the sky, because I guarantee you this won't be the first time one of these sensors has malfunctioned.

    Sully also made a mistake, and worse, he denies it to this day, instead blaming the plane and making up facts, contradicting the NTSB report.
    At the risk of going way off topic... what mistake was that exactly?



  • Evan
    replied
    Originally posted by ATLcrew View Post
    It is precisely because I'm a line pilot that the answer is NOT so well-known to me.
    Now this is interesting. Are you saying professional pilots like yourself are unaware of certain 'problem' airlines in 'problem' nations that one might want to avoid flying for?



  • ATLcrew
    replied
    Originally posted by Evan View Post
    Yes, yet again.

    If you really are a line pilot, the answer should be well-known to you...
    It is precisely because I'm a line pilot that the answer is NOT so well-known to me. Even less-known to me is the reason why you, who claim to know the answer, are so reticent to articulate it.



  • ATLcrew
    replied
    Originally posted by orangehuggy View Post
    There should be a "flight test requested before signoff" box for the pilot to tick on the maintenance log.
    I don't know that one necessarily needs a box for that. I can just write such a "request" in the discrepancy description itself, but that doesn't mean it will be honored. Now, there are certain maintenance procedures that REQUIRE a subsequent flight test, but that's not up to line crews, that's up to each carrier's AMM/GMM. At my airline those flights are done by instructors and chief pilots, there is no procedure for line crews to even volunteer for that sort of stuff.



  • Gabriel
    replied
    Originally posted by Evan View Post
    Would that include the pitch trim issues?
    I don't know, but I suspect it is a system to report incidents to be analyzed and perhaps passed on to the regulator. But any technical issue with the plane must be logged in the airplane's technical log; otherwise the plane can be dispatched with the issue unaddressed. As you can see, maintenance took action only on the things reported in the log; whatever else the crews reported on a company web page or wherever would not have had, and did not have, any immediate effect.



  • BoeingBobby
    replied
    Originally posted by Evan View Post
    Would that include the pitch trim issues?



    Because the problem hadn't been identified. There was probably nothing wrong with the ADM's. AFAIK, an incorrect AoA value will give an incorrect onside IAS and ALT value as well.

    When they say 'test on ground', I'd like to know what that means. A full test of air data sensors and readings should have revealed the AoA data discrepancy. I think any healthy maintenance culture would have checked them all before signing off on it.
    What the hell is A-SHORE? Maintenance can do all kinds of tests on electronics in the aircraft with the center FMS unit.

