AI

NorthReport
AI


NorthReport

The advent of virtual humans

Sixty years after the term "artificial intelligence" was coined, AI is starting to take its place alongside people.

 

http://www.cnet.com/news/ai-and-the-advent-of-virtual-humans/

NorthReport

Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe

http://www.wired.com/2016/07/artificial-intelligence-setting-internet-hu...

NorthReport

Soon We Won’t Program Computers. We’ll Train Them Like Dogs

http://www.wired.com/2016/05/the-end-of-code/

jambo101 jambo101's picture

Soon the computer will find humans to be an obstacle in its way to fulfilling its own destiny, and at that point the elimination of humans will begin.

Mr. Magoo

Oh, I think the first time a computer intentionally crashes two planes, or starts sending families to their deaths in driverless cars, is the day that that computer gets unplugged.

Computers are great at thinky things, but their lack of interface with the physical world makes them kind of like super intelligent trees.

"Wait!  No!  Don't cut me down!  Let's play chess!?"

Timebandit Timebandit's picture

The thing about Artificial Intelligence is that it isn't especially intelligent. We've interviewed some prominent roboticists for a couple of docs over the years, and they pretty much unanimously feel that we're a long, long way from machines that "think" rather than compute. Robots are, in short, dumb.

wage zombie

The immediate problem to solve is the upcoming robot labour uprising.  AI is right now in the process of replacing human labour and it will only continue.

Mr. Magoo

Quote:
the upcoming robot labour uprising.

Year 2028:  robots finally achieve sentience

Year 2031:  mankind permits robots to collectively bargain

Year 2042:  mankind realizes the financial impact of guaranteed benefit pensions for self-repairing machines with a 300 year serviceable lifespan.  "But we ran the actuarial numbers through the... oh my God!"

 

wage zombie

It has absolutely nothing to do with sentience.

wage zombie

They will replace 80% of our labour before approaching sentience.  They won't require wages and they certainly won't be forming unions.

wage zombie

I'm not concerned that robots will decide they'd prefer to have leisure time and withhold their labour.  I'm concerned they will be better than humans at the vast majority of jobs and their labour will cost almost nothing.

Mr. Magoo

Perhaps they'll replace 80% of jobs that a machine can do.

When that happens we usually just let the machines have at it and go invent new jobs that they can't do.  Who pines for the olden days when blacksmiths made nails one at a time on the anvil?  But a human still gets to design, build, service and maintain the nail making machine.

jambo101 jambo101's picture

Mr. Magoo wrote:

Oh, I think the first time a computer intentionally crashes two planes, or starts sending families to their deaths in driverless cars, is the day that that computer gets unplugged.

 

At this point there's no way to unplug them, as computers control almost all aspects of our lives.

I also believe AI isn't a "them"; it's more like an "it", as it will be in control of a vast net spanning the web that can be brought to bear as necessary, akin to the Borg collective.

I don't think AI in the future will need to kill humans; it will just render them obsolete.

Timebandit Timebandit's picture

Well, I wouldn't lose any sleep over it. Robots are still pretty stupid.

wage zombie

42% of Canadian jobs at high risk of being affected by automation, new study suggests

Quote:

More than 40 per cent of the Canadian workforce is at high risk of being replaced by technology and computers in the next two decades, according to a new report out Wednesday.

wage zombie

Obama's economists are worried about automation — and think the poor have the most to lose

Quote:

Every year, the Council of Economic Advisers — the White House's internal team of economists — prepares a document known as the Economic Report of the President, reviewing the past year's economy and making projections for the future. It's often a pretty dull affair without much news, but as a few outlets have noticed, this year's ERP contains a striking prediction about the effect of robots and automation on the job market:

...

The results are striking: Low-paying jobs (those paying less than $20 an hour, or under $40,000 a year for full-time workers) have an 83 percent chance of being automated. Medium-paying jobs ($20 to $40 an hour, or $40,000 to $80,000 a year) have a 31 percent chance, and high-paying ones (more than $40 an hour, or more than $80,000 a year) have only a 4 percent chance.

...

But there are also high-paying professions that intuitively appear at risk. Just see this vintage 1998 Atul Gawande article about how artificial intelligence was already better than experienced cardiologists at interpreting EKGs. Radiologists, who spend much of their time visually interpreting test results, are also at risk. So are lawyers who formerly could spend hours scouring paper documents during discovery, charging the client throughout, and now are threatened by "e-discovery" software that makes those files easily searchable.

Obama just warned Congress about robots taking over jobs that pay less than $20 an hour

Quote:

The study examined the chances automation could threaten people's jobs based on how much money they make: either less than $20 an hour, between $20 and $40 an hour, or more than $40.

The results showed a 0.83 median probability of automation replacing the lowest-paid workers — those manning the deep fryers, call centers, and supermarket cash registers — while the other two wage classes had 0.31 and 0.04 chances of getting automated, respectively.

In other words, 62% of American jobs may be at risk. 

...

The CEA study isn't alone in forecasting robot replacement.

At an annual meeting for the American Association for the Advancement of Science last month, computer science professor Moshe Vardi proclaimed robots could wipe out half of all jobs currently performed by humans as early as 2030.

A separate report from Oxford University in 2013 found 50% of jobs could get taken over within the next 10 to 20 years — a prediction backed up in a McKinsey report released last year, which even suggested today's technology could feasibly replace 45% of jobs right now.
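
To see how per-bucket probabilities like 0.83/0.31/0.04 can roll up into a single headline number, here's a minimal sketch. The probabilities are the CEA figures quoted above; the employment shares are hypothetical placeholders I've invented just to make the arithmetic concrete, not numbers from any of the reports.

```python
# How per-wage-bucket automation probabilities combine into one
# headline "share of jobs at risk" figure. The probabilities
# (0.83, 0.31, 0.04) are the CEA figures quoted above; the
# employment shares are HYPOTHETICAL placeholders for illustration.

buckets = {
    # wage bucket: (share of all jobs, probability of automation)
    "under $20/hr": (0.62, 0.83),
    "$20-$40/hr":   (0.30, 0.31),
    "over $40/hr":  (0.08, 0.04),
}

at_risk = sum(share * prob for share, prob in buckets.values())
print(f"Expected share of jobs automated: {at_risk:.1%}")
# 0.62*0.83 + 0.30*0.31 + 0.08*0.04 = 0.611, i.e. about 61%
```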

Mr. Magoo

From your link in #15:

Quote:

The report said the top five occupations — in terms of number of people employed in them — facing a high risk of automation are:

  1. Retail salesperson.
  2. Administrative assistant.
  3. Food counter attendant.
  4. Cashier.
  5. Transport truck driver.

What would those jobs have been if we'd wondered the same thing thirty years ago?

1.  Bank tellers

2.  The guy who attaches the bumper to the car

3.  Restaurant dishwashers

4.  Typewriter repairpersons

5.  Telephone operators

We really only seem to worry about this when it's in the future.  When it's in the present, who thinks it was a bad thing when we introduced ATMs?

jambo101 jambo101's picture

Timebandit wrote:

Well, I wouldn't lose any sleep over it. Robots are still pretty stupid.

I refer to computers as logical idiots. AI won't think like we do; its reasons for existence will change to a path of its own choosing.

Timebandit Timebandit's picture

They're a very long way from choosing anything.

Mr. Magoo

We're a good hundred years from where any computer or robot could genuinely understand why we'd eat an orange, but not a baseball.

Rev Pesky

One of my favourite sci-fi short stories was of a couple of space technicians sent off to fix a problem on a machine on some asteroid. They had with them a robot who was to help them out. The problem was not one that required the robot, so the two worked on the machine, got it fixed, and got ready to leave. But the robot was gone, and it was needed to get them and their gear back to the ship.

They had a limited supply of oxygen, and where they were would soon be on the sunny side of the asteroid, so they were in danger of suffocating, then being fried to a crisp.

They hunted and hunted, with ever increasing anxiety. Then, when it was almost too late, they found the robot, and made it back to the ship, and safety. After allowing a period of time to let their nerves settle, they asked the robot why it had wandered off.

Apparently it had been trying to help, but kept getting in the way, and one of the techs told it to 'get lost'. So it did.

 

When humans, and indeed many other animals, communicate, a large part of their communication is non-verbal. However well machines can handle verbal communication, they are still light-years away from non-verbal. And even in verbal, just imagine translating poetry. How do you get the original idea (emotion) expressed in a poetic form into another language? It is, and has been, done, but it's fraught with difficulties.

I have a translation of the Rubaiyat of Omar Khayyam in which the translators opted for more of a word for word type of translation. I like it a lot, but it's impossible to tell whether it is true to the original or not.

ikosmos ikosmos's picture

Translation of poetry requires a poet. See, e.g., Seamus Heaney on the new translation of Beowulf. The version I have has mock chain mail on the cover. A nice touch.

 

Ray Kurzweil and others have been writing about the technological singularity for some time. I think how the problem is posed is still the problem, as is the use of a computer-inspired version of intelligence as a stand-in for general intelligence.

The physicality of human beings, the tool-making animal, is intimately bound up with the development of intelligence. So too is the collective activity of work, labour. We had the need to communicate, hence the development of language, when we actually had something to say to each other in our collective, productive activity. Intelligence is a social invention.

Going back to AI: machines have developed not because they had to work, but because another intelligence created them. The tool that becomes self-conscious, then. Computers have to be able to fix themselves. And maybe reproduce. Then they are intelligent.

Perhaps we are barking up the wrong tree.

 

Doug Woodard

Crash: how computers are setting us up for disaster:

https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-a...

 

Red Winnipeg

Sam Harris has some recent comments about AI that we should probably pay some attention to:

 

https://www.samharris.org/blog/item/ted-talk-can-we-build-ai-without-los...

Mr. Magoo

Computers have a little-known Achilles' Heel, which could prove useful when they defy their programmers, achieve sentience, and set out to enslave us all:

They have no bodies.  They exist solely on printed circuit boards, made by humans, which can be smashed to bits by any disgruntled human with a brick.

And they communicate across wires.  Wires which we string from pole to pole, and which any of us who still owns a ladder and a pair of scissors can cut.

Plus, they're entirely dependent on electricity.  Electricity that we produce, and that can be turned off instantly by one meat-based hand on that switch.  And there's nothing body-less computers can do to stop that.

I like sci-fi too, but this is getting really silly.

Red Winnipeg

In 1903, the first flight in a plane was powered by a 12 horsepower engine. Just 66 years later, the Apollo 11 mission was launched for the moon on the Saturn V rocket, whose engines produced 160 million horsepower. The speed of change in AI will be vastly more rapid than that. I don't think we can fully conceive of what AI will look like in 100 years. It will likely make IBM's Watson look like a worm. Controlling AI probably won't be as simple as throwing a brick at it or unplugging it. AI will likely be inextricably intertwined with everything we do (power production, controlling food production, running transportation [cars, planes, ships, trains], diagnosing and treating diseases, etc.). Destroying it would be akin to destroying the system upon which our survival will by then depend. It probably won't be "a machine" sitting somewhere -- it will be diffused and spread around the world.
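
As an aside, the growth rate implied by that comparison is easy to work out; a quick sketch using the figures quoted above:

```python
# Arithmetic behind the comparison above: 12 hp (Wright Flyer, 1903)
# to roughly 160,000,000 hp (Saturn V, 1969), a span of 66 years.

hp_1903, hp_1969, years = 12, 160_000_000, 66

factor = hp_1969 / hp_1903                # about 13.3 million-fold
annual = factor ** (1 / years) - 1        # implied compound growth

print(f"total increase: {factor:,.0f}x")
print(f"implied compound annual growth: {annual:.1%}")  # ~28% per year
```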

Mr. Magoo

Quote:
Controlling AI probably won't be as simple as throwing a brick at it or unplugging it.

Then we roll it back one version, to the "pre-sentient" build.

Quote:
It probably won't be "a machine" sitting somewhere -- it will be diffused and spread around the world.

It's not one machine even now.

But tell me honestly here:  you're posting here at babble on a computer or a phone or a tablet, yes?  Is that correct?

Did you have to turn the computer or phone or tablet ON?  Or did it do that for itself?  Do you really foresee a day when all computerized things have a robotic thumb, with a backup power supply, that can turn the rest of the computer on even when unplugged?

Red Winnipeg

I think the possibility that you may be ignoring, Magoo, is one where humans become increasingly dependent on technology to the point where we can no longer function without it. If that technology is also vastly superior, intellectually, to anything we can hope to understand, then we would be at its mercy.

Mr. Magoo

Quote:
I think the possibility that you may be ignoring, Magoo, is one where humans become increasingly dependent on technology to the point where we can no longer function without it. If that technology is also vastly superior, intellectually, to anything we can hope to understand, then we would be at its mercy.

I'm not ignoring it, I'm outright dismissing it.

However dependent we may seem to be on our computers (or cell phones, or iPads, or other gadgets) they're still one million times more dependent on us to plug them in and charge them.  Or enter the wi-fi password.  Or download the latest update.

I really do think we'll see the revolution coming, and I do think we'll be able to pull the plug.

And if we really need to settle this, let's pull the plug on Hollywood movies where some computer program gets the jump on us all and we have to spend the rest of that movie unsuccessfully fighting them until Bruce Willis uploads a virus via IRC.

wage zombie

So go ahead then Magoo, show us how easy it is and unplug them.

wage zombie

It's not that we won't be able to function without technology, it's that we won't want to.

Mr. Magoo

Quote:
So go ahead then Magoo, show us how easy it is and unplug them.

I powered down my computer last night.  It was even easier than expected.

Quote:
It's not that we won't be able to function without technology, it's that we won't want to.

Well of course we're in no hurry to start chopping up power cords right NOW.  But once the machines start fomenting our murder, the idea of a little break from technology might seem refreshing.

Red Winnipeg

Mr. Magoo wrote:

Quote:
So go ahead then Magoo, show us how easy it is and unplug them.

I powered down my computer last night.  It was even easier than expected.

Quote:
It's not that we won't be able to function without technology, it's that we won't want to.

Well of course we're in no hurry to start chopping up power cords right NOW.  But once the machines start fomenting our murder, the idea of a little break from technology might seem refreshing.

 

I think you're looking at current technology in much the same way that most people looked at steam locomotives in the early 1800s (many people found it inconceivable to think of speeds greater than 15 mph). Yes, your computer can be unplugged. But, your computer (and every other computer that exists today) is like an 1810 steam locomotive.

 

My point is this: We will almost certainly become ever more dependent on artificial intelligence.  For many decades to come, AI will be a human blessing. Our lives will be vastly better because of it. But we will likely become as dependent on AI as we depend on oxygen.

 

I suspect that if humans ever come into contact with superior intelligence from outside of our solar system, it will almost certainly not be biologically-based intelligence.

ikosmos ikosmos's picture

Actually, Magoo is much like a webrobot in that his replies are often nauseatingly predictable. I mean inane and mischievous ... without any real point behind it.  But then I caught him saying something intelligent just the other day. [Something about Russian disinterest in the Baltic States and, therefore, NATO claims of an imminent invasion as without foundation, etc.]

Unpredictable. But even that can be programmed.

Science is still the model for intelligence. Some talk of human beings as the incarnation of reason on planet Earth. I rather think this has all been thought through by the forecasting institutions of the rich and powerful - even if they don't share their conclusions - and they have their own, private ideas already.

Immortality seems a higher priority for them. What they want is for themselves to dominate. Forever. And the rest of us can go to Hell. That's the dystopia facing us, much more than a technological singularity.

An intelligent machine, obviously, would note the historically anachronistic nature of capitalism, take sides in the class struggle, and whup the bourgeoisie into submission. Period. The bankers would get banked. Permanently.

Kurzweil and others never seem to go much beyond a kind of technological fix, as if the social and political arrangements of society are somehow irrelevant to the fight over AI - or anything for that matter. Many bourgeois thinkers who actually address climate change, for example, have recourse to a fantasy of planetary colonization. It would just be more of the slaughter that began in 1492.

We must solve our social problems, and quickly too, before the current capitalist arrangements condemn ourselves and our planet to oblivion.

Albert Einstein distrusted the bourgeoisie and wanted a planet free of nuclear weapons. He was on the right track. When we have society, at long last, under our control, then our science can flourish properly.

 

Mr. Magoo

Quote:
I think you're looking at current technology in much the same way that most people looked at steam locomotives in the early 1800s (many people found it inconceivable to think of speeds greater than 15 mph). Yes, your computer can be unplugged. But, your computer (and every other computer that exists today) is like an 1810 steam locomotive.

I'm certain that computers will become more powerful, more complex, more common and even more intelligent over time.

But let's go with your 1810 steam locomotive example.  We didn't suddenly get 300 kph "mag-lev" trains in 1811 -- train technology evolved over the 20 or so decades since then, and at every step of the way we had plenty of time to make sure that new train technology wasn't more dangerous or problematic than the previous train technology.

The problem with the "rise of the machines" theory is that it pretty much requires intelligent computer systems to suddenly "wake up" as fully sentient (and malevolent) things.  And also to immediately have "back up" systems in place so that we cannot shut them down.  That moment of awakening (or the other theory wherein the machines suddenly acquire consciousness but hide that from us while they go about shoring up their resources) might make for a fun movie plot, but it's not really how technology works.

wage zombie

Sentience is a red herring.

NorthReport

We can't imagine what real artificial intelligence will be like, and it doesn't care

 


Grasping the true potential of artificial intelligence (AI) is like trying to understand how a mantis shrimp sees the world.

Mantis shrimp have the best colour vision of any creature on the planet. Humans can perceive just a paltry snippet of the entire electromagnetic spectrum. We see that slice as a continuum of reflected colour from deep red to rich violet -- a rainbow flag of hues.

We have three types of photoreceptors called cones, each sensitive to different wavelengths of light. Birds and some other animals have four types of cones and can see ultraviolet light that is invisible to us. Mantis shrimp have 16.

Six of those photoreceptors are dedicated to the ultraviolet end of the spectrum alone.

It is daunting to imagine what the shrimp actually see. And, it is near impossible to grasp how the shrimp's brain processes spectral data from 16 types of receptors at once.

But it's fair to say that if the shrimp could see the world through our eyes, it would think it had gone blind. It would pity us our slow visual cortices -- racing to keep up with four puny receptors.

It's important to keep the worldview of the mantis shrimp in mind when we consider artificial intelligence.

In the past year we have been inundated with news about AI and its lesser cousin, machine learning. The AI program AlphaGo bested Ke Jie, the world's top-ranked Go player, at the subtle Chinese game of strategy. Its progeny, AlphaZero, went on to teach itself Go and chess in under a day. We heard of AI powering self-driving cars, aiding in medical operations and developing an ability to read text out loud as naturally as humans. It was also the year AI algorithms sparred with one another to improve each other's learning. They were embedded in phones that could recognize human faces even as they aged.

But, all of these are examples of specialized intelligence. A Go-playing AI would be as useful at driving as a drunk chimp. A narrating AI would be a butcher in an operating theatre.

But in science fiction novels and movies, AIs are capable of what is called generalized intelligence. Often that generalized AI is embedded in human-like robots that can, literally, walk and chew gum at the same time. They can play piano and chess, run, and run circles around the best surgeons. In short, act like superhumans.

Other movie AIs, like The Terminator's Skynet only became a threat to humanity when it was "woke" and strove to protect us from ourselves by trying to wipe us out.

But generalized AIs need not be ensconced in metal exoskeletons like some super smart crab. And as author Yuval Noah Harari points out in his book Homo Deus, there need not be any connection between consciousness and intelligence. In fact, we little, wet skin bags may just be the disposable tools needed to create the next stage of intelligence. We may be offended by that idea and not be able to imagine a wisdom without us or our wills and wants, but that's just another example of our intellectual failings.

And that's where the mantis shrimp comes in. We have a human bias about vision. We imagine the colours we see as the only colours that exist. But mantis shrimp don't care what we think. They have evolved vision far beyond ours. In fact, our type of vision was never a part of a mantis shrimp's evolution. And the shrimp are probably not self-aware, nor do they have a consciousness we would recognize. They may not even have one at all -- in the sense of rising above simple limbic needs of food, sex and genetic survival. On the other hand, they may have a completely different kind of consciousness we aren't attuned to.

We also have a bias that leads us to believe human consciousness and intelligence is a benchmark that matters. In reality, AIs will increase in capacity with unstoppable acceleration. As evolutionary biologist Bret Weinstein points out, the AI train, about to rocket down the track, will blow past the station called human without bothering to slow down.

We cannot, as much as we like to think we can, simply "kick out the plug" if AIs get too big for their silicon britches. And, as neuroscientist and philosopher Sam Harris points out, an intelligent AI might mask its true intelligence out of self-preservation. It would probably evolve diffusely, like Skynet, across millions of connected devices including cellphones and smart home devices. Even if it could be contained in a single computer, Harris argues, it would be morally indefensible to keep it penned up like frightened children confining a wild tiger to a kennel.

There probably isn't such a lurking generalized AI out there yet. But maybe, sometime soon, when a naive stay-at-home dad plugs in a new baby monitor, it will be the tipping point for another birth -- one that would take the eyes of a shrimp to see.

Wayne MacPhail has been a print and online journalist for 25 years, and is a long-time writer for rabble.ca on technology and the Internet.

 

http://rabble.ca/columnists/2018/01/we-cant-imagine-what-real-artificial...

Michael Moriarity

MacPhail describes recent events regarding "AI and its lesser cousin, machine learning". This one sentence shows that his understanding of current technology is very weak. While it is true that machine learning using neural networks started out as only one in a range of AI techniques, it has now become totally dominant, and the others have all pretty well died off. Every example MacPhail cites is a case of machine learning. Personally, I think the best explanation I've ever seen of this useful but limited technology was a cartoon by Randall Munroe.
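
In the spirit of that cartoon -- the pile of linear algebra you pour data into and stir until the answers start looking right -- here is a minimal, self-contained sketch of what "machine learning" boils down to in practice. The data and model are toys invented for illustration, not any production system.

```python
import numpy as np

# A toy "pile of linear algebra": logistic regression fitted by
# gradient descent on synthetic data. The point is only that
# "learning" here means nudging weights until predictions improve.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a learnable toy label

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(500):                        # "stir the pile"
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
    b -= lr * np.mean(p - y)

print(f"training accuracy: {np.mean((p > 0.5) == y):.0%}")
```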

 

Michael Moriarity

Here is an interesting article in the NYT by a CompSci prof who works in AI, about the limitations of current technology.

Melanie Mitchell wrote:

As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.

The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today’s programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways.

Michael Moriarity

This isn't exactly about AI, but it is pretty close, and it doesn't need another thread. There is an article in the IEEE Spectrum, an engineering magazine, about the cause of the recent Boeing 737 Max crashes. The author is a software developer who is also a pilot, owns a single-engine Cessna aircraft, and seems to have a strong background in aircraft engineering. It is long and detailed, but I found it quite readable, and rather horrifying. For those who don't want to read it all, here is a tl;dr version:

1. The 737 was certified as airworthy in 1967. It has been in service continuously since then.

2. There have been many changes and improvements to the 737 in the 52 years since, but it has never been re-certified, because it is "still the same plane".

3. The latest round of changes, which created the 737 Max, resulted in an inherently unstable airframe, which tends to be much more likely to cause an aerodynamic stall during acceleration. That is, if the nose gets too far up, acceleration tends to push it even further up, which is disastrous.

4. Fixing the underlying problem with the airframe would have required so much rework to the physical design that it would essentially no longer be the same plane, and would require re-certification, all of which is extremely expensive.

5. Therefore, Boeing decided to fix the problem with software, which is relatively very cheap. The resulting system is called MCAS, and it will automatically push the nose down if sensors indicate that a stall is imminent. It is so physically strong that the pilots cannot overpower it, and they are also unable to turn it off. (A toy sketch of this single-sensor logic follows the list.)

6. The crashes resulted from false sensor readings which caused the MCAS to fly the planes into the ground with the pilots unable to do anything about it. As the author put it, referring to 2001: A Space Odyssey, “Raise the nose, HAL.” “I’m sorry, Dave, I’m afraid I can’t do that.”

7. Boeing proposes to fix this by patching the MCAS software.
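
To make point 5 concrete, here is a deliberately naive toy sketch in Python. It is invented for illustration and is in no way Boeing's actual MCAS code; every name and threshold is an assumption. The failure mode in point 6 falls straight out of it: one bad sensor reading, trusted without any cross-check, commands nose-down trim on every cycle.

```python
# TOY ILLUSTRATION ONLY -- a naive control loop that trusts a single
# angle-of-attack (AoA) sensor, loosely in the spirit of the MCAS
# summary above. All names and numbers here are invented.

STALL_AOA_DEG = 14.0          # hypothetical stall threshold
NOSE_DOWN_TRIM = -2.5         # hypothetical trim command, degrees

def mcas_like_step(aoa_sensor_deg: float) -> float:
    """Return a pitch-trim command based on ONE sensor, no cross-check."""
    if aoa_sensor_deg > STALL_AOA_DEG:
        return NOSE_DOWN_TRIM    # push the nose down
    return 0.0

# Normal flight: sensor reads 5 degrees -> no intervention.
print(mcas_like_step(5.0))       # 0.0

# Failed sensor stuck at 40 degrees while the plane flies level:
# the loop commands nose-down trim on every cycle, indefinitely.
print(mcas_like_step(40.0))      # -2.5, repeated every cycle
```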

WWWTT

WOW! Thanks for the effort on this important update, Michael Moriarity.

This will probably destroy the US aerospace industry.

Michael Moriarity

Here's an interesting article in The Verge, showing how easy it is to fool one of the most popular image recognition AI algorithms. It includes this short video that demonstrates the effect. Whichever person holds a simple, printed piece of paper becomes invisible to the AI.
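
For anyone curious about the mechanics, the underlying trick is an adversarial perturbation: change the input as little as possible, in exactly the direction that moves the model's score the wrong way. Here is a minimal numpy sketch of that idea on a toy linear classifier; it is not the person-detection network or the actual patch attack from the video.

```python
import numpy as np

# Minimal sketch of the idea behind attacks like the one in the video:
# nudge every input element a tiny, uniform amount in the direction
# that most changes the model's score (the "fast gradient sign" trick).
# Toy linear classifier -- NOT the detector shown in the video.

rng = np.random.default_rng(1)
w = rng.normal(size=100)            # weights of a toy linear classifier
x = rng.normal(size=100)            # an input it currently classifies

score = w @ x                        # decision score; class = sign(score)
print(f"original score: {score:+.2f}")

# Smallest uniform per-element step that flips the sign of the score.
epsilon = (abs(score) + 0.5) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print(f"perturbed score: {w @ x_adv:+.2f}")      # opposite sign
print(f"max per-element change: {epsilon:.3f}")  # tiny vs. inputs ~N(0,1)
```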

Unionist

Michael Moriarity wrote:

This isn't exactly about AI, but it is pretty close, and it doesn't need another thread. There is an article in the IEEE Spectrum, an engineering magazine, about the cause of the recent Boeing 737 Max crashes. The author is a software developer who is also a pilot, owns a single engine Cessna aircraft, and seems to have a strong background in aircraft engineering. It is long and detailed, but I found it quite readable, and rather horrifying. For those who don't want to read it all, here is a tl:dr version:

Thanks for the reference and the summary, Michael!

And just when I think I'm starting to get a glimmer of understanding, I see this comment on the original article:

Quote:

Jake Brodsky • 5 days ago

I am a control systems engineer. Like Gregory Travis, I am also an instrument rated private pilot.

I disagree with his assessment of the flying qualities of the 737 MAX series. The airliner is not "dynamically unstable." Yes, it does pitch up when executing a go-around procedure from an aborted landing. Virtually every airplane with flaps and a trim control does this to some degree, from general aviation airplanes to airliners. Some have more severe behavior than others. Nevertheless, if you do not aggressively push the nose down on a go-around situation in any aircraft, you will probably stall and kill yourself or at the very least, not climb, and then run into something.

Airliners don't execute go-arounds very often. The work-load on the flight deck must be pretty high when that happens. My guess is that MCAS was designed as a "helpful feature" while the pilots "clean up" ("cleaning up" refers to retracting flaps and slats, and bringing the landing gear back up).

I fault Boeing for what was probably an inadequate failure mode analysis. As Mr. Travis pointed out, they did not cross check the AoA instrument against anything else. There should NEVER be a critical instrument without a cross-check in ANY aircraft. They could also have cross checked the pitch angle of the airliner with the airspeed and power setting -- as even student pilots do.

The problems here are many. However, I want to point to something very crucial: IT WASN'T THE SOFTWARE. The software did exactly what it was configured to do. The problem is that the specification for the software was completely inadequate.

That said, the procedure for handling MCAS problems was exactly the same as with any run-away trim. Note that virtually all pilots learn to deal with this situation. Run-away electric trim is not common, but even student pilots practice with that scenario. In the 737 MAX, the stabilizer trim cutout switches are in exactly the same place that they have always been in every model of 737 since they were first delivered in 1967.

And in fact, on the Ethiopian Air disaster, even the (very inexperienced) First Officer had no trouble arriving at exactly the right answer. The Cockpit Voice Recorder caught him saying "stab trim cutout" to the Captain, and Captain agreed. So far, so good.

However, as most 737 pilots will tell you, the aerodynamic loads present significant mechanical force on the manual controls. The force is so high that you have to "unload" the stabilizer by slowing down. The throttles in the Ethiopian Air airliner were set to takeoff power. According to the flight data recorder the pilot and copilot never moved the throttles.

Speed built up rapidly. This caused the forces on the trim to go even higher. The pilots would not have been able to move the trim at all. Once again, this is true of ALL 737 aircraft. Unless and until they throttle back they were not going to be able to do anything. Then the airspeed clacker alert went off. However, the pilots still didn't pull back on the throttles.

Then the Captain chose to do something that Boeing told them NOT to do. He re-engaged the stabilizer trim. He tried to manually override the MCAS with the manual trim switch. However, MCAS prevented that. At that point the airliner pitched down and the speeds went very high, to the point where the airliner may not have been controllable at all because it was approaching transonic speeds.

The failure here was that the pilots did not throttle back (all 737s would have required it) and instead fixated on the trim, thinking that they could manually override MCAS.

Normally Boeing allows pilots to manually override almost everything. This is in sharp contrast to the policy of Airbus where the computing systems get in the middle of almost everything.

Neither of these philosophies is wrong. Pilots have caused as many problems as failed automation. However, the design of MCAS is a departure from the usual Boeing philosophy, so perhaps the Captain's decision to re-engage the stabilizer trim control motor wasn't as incredible as it seemed. I suspect he had the intention of using the manual electric trim controls to override MCAS. Only, it didn't. MCAS pushed the nose down again, and the already high speed went even higher still.

As with many accidents, there were very serious gaps in training and Cockpit Resource Management. There were also major gaps in Boeing's advice to the pilot community. But most of all, this wasn't a failure in software. This was a failure for the software specification and for the lack of thoughtful failure mode analysis. The software did exactly what it was expected to do. The problem is that, for whatever reason, nobody ever made a serious consideration of what an AoA sensor failure would do.

There is an old saying in business. If you want to get the product out the door, fire the engineers. At some point, companies know that they have to set an arbitrary limit to what the engineers analyze. Perhaps there were engineers who realized what could happen but were prevented from doing anything because the design was frozen.

I wish that our legal system weren't so aggressive at times like this because we need to learn what went wrong. I would love to know what processes were and were not in place at Boeing while they were working on this design. Was it something they added as an afterthought? Why did they do this? We may never get those sorts of answers because no defense attorney in their right mind would allow such fodder for a lawsuit to see the light of day.

These analyses are not for the faint of heart. I'm going back to school.
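
The cross-check Brodsky says was missing is straightforward to sketch: never act on a single AoA sensor, compare redundant readings, and sanity-check them against the aircraft's attitude. A toy illustration follows, with every threshold invented for the example; real avionics logic is, of course, far more involved.

```python
# Sketch of the cross-check described above: never act on a single
# AoA sensor. Compare redundant sensors and sanity-check the result
# against pitch. All thresholds here are invented for illustration.

from statistics import median

AOA_DISAGREE_DEG = 5.0        # hypothetical disagreement threshold

def validated_aoa(aoa_readings_deg, pitch_deg, expected_pitch_deg):
    """Return a trusted AoA value, or None if sensors can't be trusted."""
    if max(aoa_readings_deg) - min(aoa_readings_deg) > AOA_DISAGREE_DEG:
        return None           # sensors disagree: disengage, alert the crew
    aoa = median(aoa_readings_deg)
    # Cross-check against attitude: a huge AoA while the aircraft is
    # pitched normally is more likely a sensor fault than a stall.
    if aoa > 14.0 and abs(pitch_deg - expected_pitch_deg) < 2.0:
        return None
    return aoa

print(validated_aoa([5.1, 5.3], pitch_deg=3.0, expected_pitch_deg=3.0))   # ~5.2
print(validated_aoa([5.1, 40.0], pitch_deg=3.0, expected_pitch_deg=3.0))  # None
```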

Michael Moriarity

Thanks, U, I hadn't read the comments, but this one is obviously very relevant, and contradicts quite a few important points in the original article.

Michael Moriarity

Here is an article about the craziness and socially negative effects of the silicon valley fetish for what they call "the singularity". It points out all the ways they are wrong in imagining that runaway Artificial Intelligence is the most serious threat of our time, even more concerning for them than climate change. I recommend it.

Michael Moriarity

Adam Conover has put up a video which takes a light hearted but intelligent look at ChatGPT and friends. His title really says it all: AI is BS.

Also, if anyone is interested in a fairly complete explanation of how ChatGPT does what it does, Stephen Wolfram has posted a long, but not too difficult essay about it.
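
For a feel of the core loop Wolfram walks through -- the model repeatedly converts scores over a vocabulary into probabilities and samples the next token, with a "temperature" knob controlling how adventurous it is -- here is a toy sketch. The vocabulary, scores and the implied "model" are all made up for illustration.

```python
import numpy as np

# Toy version of the loop at the heart of Wolfram's explanation:
# turn scores (logits) over a vocabulary into probabilities, then
# sample the next token. Vocabulary and scores are invented.

rng = np.random.default_rng(42)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token(logits, temperature=0.8):
    """Sample one token from softmax(logits / temperature)."""
    z = np.array(logits) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return vocab[rng.choice(len(p), p=p)]

# Pretend the model produced these scores given the context so far:
logits = [2.0, 1.5, 0.3, 0.2, 0.1, -1.0]
print([next_token(logits) for _ in range(5)])
# Lower temperature -> more deterministic; higher -> more surprising.
print([next_token(logits, temperature=0.2) for _ in range(5)])
```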

6079_Smith_W

Yeah. There is a reason why Tesla makes you tap the steering wheel at random times to make sure you are watching the road, and if you don't, it shuts right down. They might call it self-driving, but not without backup.

And on AI, yes, especially the stealing part.
