
the continuation


  • the continuation

    Airbus has been testing new automated technology that it says has the potential to improve flight safety.

  • #2

    --- Judge what is said by the merits of what is said, not by the credentials of who said it. ---
    --- Defend what you say with arguments, not by imposing your credentials ---



    • #3
      Yes, Gabe, but not for commercial.



      • #4
        I know that there are other perspectives, but from the technical perspective, achieving that feat in a modern twin-jet airliner is not much different than doing it in a modern business jet. The computer doesn't know under which FAR the airplane is operating.

        --- Judge what is said by the merits of what is said, not by the credentials of who said it. ---
        --- Defend what you say with arguments, not by imposing your credentials ---



        • #5
          Originally posted by Gabriel View Post
          The computer doesn't know under which FAR the airplane is operating.
          Not unless the crew discuss this in the pod bay and the computer can lip-read.

          AI, synthetic voice, assuming control… ok, NOW we can call it HAL. Remember that in 2001, the AI was so advanced that it developed a paranoid psychological complex.
          Do we need this in the cockpit? I don’t trust AI to remain logical and reliable. Especially if it is going to replace one of the pilots. And, make no mistake, that is the intention.



          • #6
            Originally posted by Evan View Post

            Not unless the crew discuss this in the pod bay and the computer can lip-read.

            AI, synthetic voice, assuming control… ok, NOW we can call it HAL. Remember that in 2001, the AI was so advanced that it developed a paranoid psychological complex.
            Do we need this in the cockpit? I don’t trust AI to remain logical and reliable. Especially if it is going to replace one of the pilots. And, make no mistake, that is the intention.
            This is not AI. This is classic procedural-algorithmic computing. The "decisions" that the computer makes are not based on finding patterns and figuring out what comes next. The "decisions" of these systems are all of the "if-then" type. With AI, you cannot simulate or predict what the response to a certain input will be. With these systems, you can do exactly that. While there may be some AI involved in analyzing the weather and such, the criteria to take over the plane and land are 100% pre-determined and 100% overridable. And the procedure for how to fly the plane, navigate to a suitable airport, and land is 100% procedural too. No similarity with HAL: it cannot go paranoid, cannot change its mood, cannot consider the consequences of its decisions, nothing. It is just a more complicated version of 10 PRINT "HELLO WORLD", 20 GOTO 10.
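
            A minimal sketch of what such pre-determined, overridable "if-then" criteria might look like in plain procedural code. This is purely hypothetical (Python is used only for illustration; every function name and threshold below is invented, not Airbus's actual logic):

            from dataclasses import dataclass

            @dataclass
            class CrewState:
                seconds_since_last_input: float  # time since any crew control or button input
                acknowledged_alert: bool         # did the crew respond to the "are you there?" alert?

            def should_take_over(crew: CrewState, crew_override: bool) -> bool:
                # Every branch is fixed in advance, and the crew can always override.
                if crew_override:
                    return False
                if crew.acknowledged_alert:
                    return False
                return crew.seconds_since_last_input > 300.0  # e.g. five minutes of silence

            def pick_diversion_airport(candidates):
                # Procedural selection: filter by fixed criteria, then take the nearest.
                usable = [a for a in candidates
                          if a["runway_length_m"] >= 2000 and a["weather_ok"]]
                return min(usable, key=lambda a: a["distance_nm"]) if usable else None

            Given the same inputs, these functions always return the same answer, which is exactly the property that separates this kind of logic from a trained model.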

            --- Judge what is said by the merits of what is said, not by the credentials of who said it. ---
            --- Defend what you say with arguments, not by imposing your credentials ---



            • #7
              Originally posted by Gabriel View Post

              This is not AI. This is classic procedural-algorithmic computing. The "decisions" that the computer makes are not based on finding patterns and figuring out what comes next. The "decisions" of these systems are all of the "if-then" type. With AI, you cannot simulate or predict what the response to a certain input will be. With these systems, you can do exactly that. While there may be some AI involved in analyzing the weather and such, the criteria to take over the plane and land are 100% pre-determined and 100% overridable. And the procedure for how to fly the plane, navigate to a suitable airport, and land is 100% procedural too. No similarity with HAL: it cannot go paranoid, cannot change its mood, cannot consider the consequences of its decisions, nothing.
              Are we talking about Garmin or Project Dragonfly? I see the latter going in that direction. The AI will be available, and tech just has to innovate, heedlessly, into the dystopian future, because if Company A doesn't exploit it, Companies B, C, D, et al. will bury them with it. The FAA needs to get ahead of this and keep AI out of the cockpit for about another half century until we better understand it.

              It is just a more complicated version of 10 PRINT "HELLO WORLD", 20 GOTO 10.
              Right, that is the insane sort of obsessive behavior computers will fixate on. I wrote a loop once in BASIC at the age of 12 on the high school Digital PDP-8/E. It proceeded to self-destruct the teletype machine. Paper feed everywhere; I think the carriage return bell fell off. They banned me. This was the beginning of a long, antagonistic relationship with computers.



              • #8
                Originally posted by Evan View Post
                Are we talking about Garmin or Project Dragonfly? I see the latter going in that direction. The AI will be available, and tech just has to innovate, heedlessly, into the dystopian future, because if Company A doesn't exploit it, Companies B, C, D, et al. will bury them with it. The FAA needs to get ahead of this and keep AI out of the cockpit for about another half century until we better understand it.
                The thing is that AI just doesn't make sense for this application. AI is a revolutionary tool today and is going to be amazing in the future, but it is not a one-size-fits-all tool. Things that can be well modeled are better handled with procedural coding. It is just more effective, testable, predictable, and safer (see the small sketch at the end of this post).

                Right, that is the insane sort of obsessive behavior computers will fixate on. I wrote a loop once in BASIC at the age of 12 on the high school Digital PDP-8/E. It proceeded to self-destruct the teletype machine. Paper feed everywhere; I think the carriage return bell fell off. They banned me. This was the beginning of a long, antagonistic relationship with computers.
                How many engineers reviewed your code? How much beta testing did you do before releasing the code? How many times did you run your code on a simulated teletype machine, trying all sorts of normal and abnormal inputs, to verify that its behavior was robust? Which release is the one that broke the teletype? Let me guess: 0.0?

                Yes, badly written procedural code is dangerous. Badly designed and trained AI is even more dangerous.
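
                A tiny self-contained illustration of that point (all names and numbers invented, not from any real system): because a well-modeled rule is explicit, you can enumerate its normal and abnormal inputs and gate the release on the checks passing.

                def reachable_by_glide(altitude_ft: float, distance_nm: float) -> bool:
                    # Fixed rule: require at least ~300 ft of altitude per nautical mile to the field.
                    if altitude_ft < 0 or distance_nm < 0:
                        raise ValueError("negative altitude or distance")
                    return altitude_ft >= 300.0 * distance_nm

                def test_reachable_by_glide():
                    assert reachable_by_glide(9000.0, 20.0) is True   # plenty of margin
                    assert reachable_by_glide(5000.0, 20.0) is False  # too far to reach
                    assert reachable_by_glide(0.0, 0.0) is True       # boundary case
                    try:
                        reachable_by_glide(-1.0, 5.0)                 # abnormal input is rejected
                        assert False, "expected ValueError"
                    except ValueError:
                        pass

                if __name__ == "__main__":
                    test_reachable_by_glide()
                    print("all checks pass")

                It is much harder to write this kind of exhaustive, release-gating check for a model whose behavior lives in learned weights rather than in rules you can read.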

                --- Judge what is said by the merits of what is said, not by the credentials of who said it. ---
                --- Defend what you say with arguments, not by imposing your credentials ---



                • #9
                  Originally posted by Evan View Post

                  The FAA needs to get ahead of this and keep AI out of the cockpit for about another half century until we better understand it.
                  They're too busy writing regulations mandating small navy-sized fleets of rescue ships at any runway near any body of water much bigger than a puddle. Just kidding...or am I?



                  • #10
                    Originally posted by ATLcrew View Post

                    They're too busy writing regulations mandating small navy-sized fleets of rescue ships at any runway near any body of water much bigger than a puddle. Just kidding...or am I?
                    and figuring out how to secure the forward galleys...



                    • #11
                      Originally posted by Gabriel View Post

                      How many engineers reviewed your code? How much beta testing did you do before releasing the code? How many times did you run your code on a simulated teletype machine, trying all sorts of normal and abnormal inputs, to verify that its behavior was robust? Which release is the one that broke the teletype? Let me guess: 0.0?
                      Lucky guess.

                      The thing is that AI just doesn't make sense for this application. AI is a revolutionary tool today and is going to be amazing in the future, but it is not a one-size-fits-all tool. Things that can be well modeled are better handled with procedural coding. It is just more effective, testable, predictable, and safer.
                      You know I was half-serious, but the serious half is based on the idea that AI can make autonomous decisions, and if it ever has to make a life-or-death decision that involves its own death (such as a ditching in the Hudson), I think it might be unpredictable. And I do wonder if the 2001 scenario is plausible once a machine achieves 'consciousness'. And I think that is where things are going.

                      So yes, I think the FAA needs to get ahead of this and limit the authority of AI until it is more understood.



                      • #12
                        Originally posted by Evan View Post
                        So yes, I think the FAA needs to get ahead of this and limit the authority of AI until it is more understood.
                        There's really no AI involved here. Although I'm sure the popular press would have you believe otherwise.



                        • #13
                          Originally posted by flashcrash View Post

                          There's really no AI involved here. Although I'm sure the popular press would have you believe otherwise.
                          I suppose I based that assumption on this sentence:

                          The system also allows the plane to speak to air traffic control over the radio with a synthetic voice created through the use of artificial intelligence.
                          Admittedly, that is a vague statement, but, as I said, the FAA should get ahead of AI because it is coming sooner than we might imagine.

