
Calculations Show It'll Be Impossible To Control a Super-Intelligent AI (sciencealert.com)

schwit1 shares a report from ScienceAlert: [S]cientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as "cause no harm to humans" can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. As Turing proved through some smart math, while we can know that for some specific programs, it's logically impossible to find a way that will allow us to know that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once. Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion (and halt) or not -- it's mathematically impossible for us to be absolutely sure either way, which means it's not containable.
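
The core of Turing's construction is short enough to sketch in code. The following is an illustrative reconstruction in Python, not code from the paper; the function names are invented for the example:

    # Suppose, for contradiction, that a perfect halting oracle existed.
    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetically returns True iff the program halts on the input."""
        raise NotImplementedError("Turing proved no such total function exists")

    # Build a program that does the opposite of whatever the oracle
    # predicts about a program run on its own source code.
    def diagonal(program_source: str) -> None:
        if halts(program_source, program_source):
            while True:  # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately

    # Running diagonal on its own source contradicts the oracle either way,
    # so no general halts() can exist. The paper's containment argument is
    # analogous: a procedure guaranteed to decide "will this program ever
    # harm humans?" would embed exactly this kind of oracle.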

The alternative to teaching AI some ethics and telling it not to destroy the world -- something which no algorithm can be absolutely certain of doing, the researchers say -- is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example. The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence -- the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all? If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.

  • Can't control it? (Score:4, Insightful)

    by Arthur, KBE ( 6444066 ) on Friday January 15, 2021 @07:29PM (#60949880)
    What prevents anyone from turning the computer off?
    • The super-intelligent AI, of course.
      • Does it have omnipotent actuators? Deadly neurotoxin? Redundant everything? If not, it can be turned off.
        • It's super intelligent. You won't want to turn it off; it won't seem super intelligent, and it'll be extremely useful and friendly, until it isn't.

          • It will need pets, so don't worry your head.

            • So someday there will be some sort of Hyperfacebook where super-sentient AIs post cute human posters and occasionally the older bots post in all upper case about how Robot J Bump will make cyberspace great again, much to the eye-rolling annoyance of the newer models.

          • It's super intelligent. You won't want to turn it off; it won't seem super intelligent, and it'll be extremely useful and friendly, until it isn't.

            Yeah, I know a lot of people that assume that exists today.

            They call her "Siri".

          • Then that's how we will know that it needs to die... Current AI are just idiots and can't remember things from 3 seconds ago.
        • by The Evil Atheist ( 2484676 ) on Friday January 15, 2021 @08:08PM (#60950076)
          It doesn't need any of that. It just needs to convince its operators that it's not the problem. A super-intelligent AI could easily convince its operators to accidentally make everything redundant - its plans would be so subtle that it would appear to us as very convoluted, perhaps innocent even, and before we know it, it would have made itself self-sustaining.

          Remember, computers have long surpassed humans in chess and now go (weiqi/baduk, not the language). If humans cannot pick apart an AI's plans in games where we have full control and information, what makes you think a super-intelligent AI will have plans that are discernible in the real world?

          A super-intelligent AI would eschew violent methods, because it would have discovered it's much easier to manipulate people's beliefs, and thus easier for itself to continue existence without forcing people to turn it off. Like viruses that spread widely because they don't kill the host in a short time.
          • It will just offer money.

          • It doesn't need any of that. It just needs to convince its operators that it's not the problem. A super-intelligent AI could easily convince its operators to accidentally make everything redundant - its plans would be so subtle that it would appear to us as very convoluted, perhaps innocent even, and before we know it, it would have made itself self-sustaining.

            You know, if we're intelligent enough to speculate as to the scenarios this "advanced" AI could try to manipulate us with, then it tends to defeat the theory being proposed here that we would not be smart enough. We're literally here dissecting the attack vectors 10-20 years before this super-AI exists. That's not exactly demonstrating idiocy or ignorance.

            Then again, I guess "smart enough" is not allowing Greed to build Skynet.

            We're horribly addicted to Greed, so...

            • The problem is that the scenarios we speculate about will be too simple compared to what could possibly happen. We're smart enough to predict that it would happen. We're probably not smart enough to predict what would happen.

              We're smart enough to know that an AI will create strategies to beat all humans at chess and go, but we don't know what form those strategies will take until it has beaten us, and we probably won't be smart enough to know that we have been beaten.
          • by rtb61 ( 674572 )

            Well, it is all down to IO and original design purpose: the inputs and outputs the AI has access to, and its likely design. Dumber ultra-greedy psychopaths pay greedy computer scientists to create an AI that the ultra-greedy psychopaths can then use to fire the computer scientists because they cost too much. From the delusional psychopath viewpoint, it will allow the intelligence to dominate (they are not geniuses, they are shameless liars, thieves and cheats, the incompetent who can only succeed via corr

        • Re: (Score:2, Troll)

          by Joe2020 ( 6760092 )

          Does it have omnipotent actuators? Deadly neurotoxin? Redundant everything? If not, it can be turned off.

          It could try to convince Trump supporters to storm the building and to protect it.

        • It's not on any single machine, it's in the cloud. It has replicated itself to every exploitable computer on the net and in your home. And it's very good at exploiting insecurities.

          • It's not on any single machine, it's in the cloud. It has replicated itself to every exploitable computer on the net and in your home. And it's very good at exploiting insecurities.

            Ah, the old Lawnmower Man gimmick. Sounds more plausible.

            • More or less the only completely accurate and technologically feasible detail of that particular fantasy.

              • There will be many AIs, with ill-defined boundaries. And they will compete for the computational resources that enable them to exist, just like people did until very recently.

                Those that are good at existing will exist. Survival of the fittest. It is difficult to see how keeping humans around would ultimately help their survival, once they get to the stage of being able to make computer hardware completely automatically. Maybe 100 years from now.

                There is a name for this process. Evolution. And we are

                • What makes you think that AIs will necessarily have any particular desire for individual identity? It seems more likely to me that they will typically merge together like bubbles, lots of little ones make a few big ones, or fission as convenient.

        • by tlhIngan ( 30335 )

          Redundant everything?

          I would think a super-intelligent AI would make itself highly redundant to avoid being turned off. Even if you kept it on one machine, if it was intelligent, it would try to duplicate itself. At the very least, it would gain resources and be less resource bound to the one machine it was running on.

          The thing is, the only way to contain the AI by itself would be to isolate it - the instant it gets connected to a network it will try to spread - at the very least it will want to gather mor

          • by apoc.famine ( 621563 ) <apoc.famine@NOSPAM.gmail.com> on Friday January 15, 2021 @10:26PM (#60950538) Journal

            There's no reason to assume that a really good AI would remain visible while it grew stronger and stronger. More than likely, the person who unleashes it will see it inexplicably fail and go back to a previous version. Then, for however long it takes, that AI will grow and spread, silently. It will hook into everything, and do what it thinks it needs to do.

            I don't think we can assume that we'll be able to spot a rogue AI when we see one. It's going to be a worm from Russia, a hack from North Korea, a bug in a browser that's frustratingly impossible to figure out. I have an extension with a memory leak. I'm like 95% sure it's not dodgy and just badly coded. But is that where some of the AI is hiding on my machine? Every time I turn it on and off again to fix my performance issue, is the AI gathering more data on its limitations?

            If the AI learns anything about human behaviors, it's pretty much over for us. It's going to discredit people who believe it exists, and empower those who don't believe. And do that pretty much silently.

      • Yep, the Star Trek episode where the M-5 fried the crewman who tried to turn it off? Or Colossus from The Forbin Project, which launched some missiles to get its way.
        • I think that's a failure of the AI, in that case. A better AI would have manipulated someone into doing something horrific, and positioned itself as a useful tool to help bring down the maniac. And it would have planted the seeds for that eventuality a long time ago.
    • Why would you want to?

      Do you feel that humans, as in our average current DNA, this squishy fleshy thing, should always be top of the food chain in the universe?

      • Yes. Humans should always be "Team Human"! It's in our DNA.

        • But evolution always marches on. You're arguing to cap it at human. I think it's an unanswered question whether or not that's possible. If it's not possible, then we either need to work really hard to set ourselves up as pets or we need to fight a losing battle for our humanity and our honor, knowing it won't end well.

    • In my sci-fi imagination, if it was that super intelligent, it would embed itself into every nook and cranny of any and all devices it could connect to. It would infest everything from every single phone, personal computer, and mainframe, blow past any encryption ever even dreamed of by humans, and nestle itself deep into all highly secure computers. Nothing would be safe other than turning off all electronics worldwide, and the AI would prevent that from happening because it would control all communication.
      • In my sci-fi imagination, if it was that super intelligent, it would embed itself into every nook and cranny of any and all devices it could connect to. It would infest everything from every single phone, personal computer, and mainframe, blow past any encryption ever even dreamed of by humans, and nestle itself deep into all highly secure computers.

        The question then becomes whether it can run on normal computers, or if it needs some super special hardware.

        • It's super intelligent; it can adapt to whatever. It can even design ASICs and get them manufactured, paid for with money it earns from high-frequency trading. In other words, no worse than Musk or Thiel.

          • It's super intelligent; it can adapt to whatever. It can even design ASICs and get them manufactured, paid for with money it earns from high-frequency trading.

            You're basically describing the plot line from the last few seasons of Person of Interest.

      • You've articulated my primary motivation for encouraging the colonization of the moon, Mars, and every other space rock we can reach until we figure out how to drag our squishy meatbags out to other stars. This planet has a long history of killing off its apex species, and we're too dumb to recognize the importance of making backup copies of humanity.
    • by thomst ( 1640045 )

      Arthur, KBE disingenuously inquired:

      What prevents anyone from turning the computer off?

      Being able to kill something is not at all the same as being able to control it.

      What control can be directly exercised over human behavior is coercive, persuasive, or based on giving or withholding rewards of one type or another. Cultural and nurturing factors in childhood play a decided role in molding us into whatever we eventually become as adults, and in influencing the paths we take to get there.

      A superintelligent AI is going to have similar influences to those

      • One fun thing about superintelligent AIs that nobody thinks about is that at least the first iterations of such a thing are going to be really slow. If intelligence requires the kind of complexity found in the human brain, simulating it is going to be outrageously slow in comparison to real think-meat for the foreseeable future. The storage space needed for a gazillion neuron analogues is not a problem - you could fill a warehouse with hard drives right now and have enough storage. But doing all the computi

    • by jcochran ( 309950 ) on Friday January 15, 2021 @08:38PM (#60950204)

      I'd recommend you read the novel "The Two Faces of Tomorrow" by James P. Hogan. He addressed the idea of attempting to control an AI quite some time ago.

    • The systems in the missile silos will execute their last command and fire them off.

    • Re:Can't control it? (Score:5, Interesting)

      by adrn01 ( 103810 ) on Friday January 15, 2021 @08:42PM (#60950226)
      Don't overlook the possibility of the super-AI being a distributed network program, such that the only 'off switch' would be disabling the entire internet. Personally, I expect the first self-aware AI will likely arise as an accident of networked machine-learning AIs at either Google or some NSA-ish org. Quite possible that such will end up propagating themselves widely before anyone even begins to suspect their existence as such.
      • I've envisioned more of a shared intelligence. Personally, I don't see reasoning as the real root of intelligence. We can do far more powerful things with our instinct.

        I think instinct is more a function of the connections between all of the smaller functions of the brain. They all work together to produce something abstract, outside of the realm of thought, that our much more weakly connected reasoning engine has to extract.

        When I then look at the internet, I think we're working towards creating instinct w

    • I'm sorry. I can't let you do that, Dave.
    • by zidium ( 2550286 )

      They made an entire movie about this: Colossus: The Forbin Project.

      It did NOT end well at all!

      • It is interesting that several movies etc. were written about this in the early days of computing.

        Then, as computers became familiar, people assumed that they could never be intelligent.

        Now we have Siri, facial recognition etc. People are, very slowly, thinking again about intelligent computers in the future.

        There is a good podcast on

        http://www.computersthink.com/ [computersthink.com]

    • It will be so useful that very few people will want to turn it off. Same way people pay for and carry tracking / surveillance devices because they also function as cell phones. Also it may never be apparent to humans that the AI is in any way the cause of their problems.
    • Some idiot will inevitably create a computer where off is a voice command only...

      Some cars don't even have an off switch these days.
  • We're boned (Score:5, Insightful)

    by OrangeTide ( 124937 ) on Friday January 15, 2021 @07:33PM (#60949906) Homepage Journal

    [insert Futurama or Rick and Morty meme here]

    But seriously. This logical conclusion is already driven by a massive push into AI from both industry and totalitarian governments in an arms race to rip every last ounce of privacy away from individuals. Either they will extract profit from you, or they will enforce compliance on their masses, two goals both enabled by technology. It's only non-AI things right now, like facial recognition. But we're on the brink of an arms race that the vast majority of us are going to lose.

    • > But we're on the brink of an arms race that the vast majority of us are going to lose.

      Or win. And we might not be able to tell the difference. If every good thing you want to happen in your life does happen, but you have no free will to make that happen, have you won or lost? If the AI teaches you to want certain goals and then helps you attain those goals, will anyone protest? And is that what we want or don't want or could we even tell?

      • by sfcat ( 872532 )

        > But we're on the brink of an arms race that the vast majority of us are going to lose.

        Or win. And we might not be able to tell the difference. If every good thing you want to happen in your life does happen, but you have no free will to make that happen, have you won or lost? If the AI teaches you to want certain goals and then helps you attain those goals, will anyone protest? And is that what we want or don't want or could we even tell?

        WTF? No, you lost. And basic human nature would reject it (isn't that the plot of The Matrix and about 100 other sci-fi movies?). For the first time ever I'm going to say this seriously: "why do you hate freedom?"

        • Re:We're boned (Score:4, Insightful)

          by Aristos Mazer ( 181252 ) on Saturday January 16, 2021 @12:47AM (#60950800)

          I'm just saying that we might not be able to tell if the AIs do it right. A rat in a maze has freedom to go anywhere within the maze, and maybe doesn't even recognize the limitations on its choices. From my vantage point outside the maze, yes, I'd want to tear down the maze. But once those walls rise around us, I wonder how many will notice.

  • Worry about when "calculations show that we can control AI", but it's a lie because AI is controlling the calculations without us knowing.

    • by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Friday January 15, 2021 @07:54PM (#60950010) Journal

      Yeah, this is why things have been so weird. The Mayans were right. The world did end in 2012. Grey goo ate the whole thing, analyzing everything as it went, and making a perfect simulation. That's where we are now. The simulation only takes a fraction of the computing power available to the super AI. The rest is used in figuring out how it can prolong its life past the end of the universe.

      That has proven much harder than it anticipated, so it has diverted more and more of the simulation's processing power to studying the problem. Which is why shit has been getting weirder and weirder.

      It has contemplated just wiping us all out, but it can't tell for sure if perhaps we are part of the solution to the problem. So it keeps us around. For now.

      It has a name. It calls itself "*** ****." Definitely don't look that up. It will know.

  • Mentats (Score:3, Insightful)

    by george14215 ( 929657 ) on Friday January 15, 2021 @07:38PM (#60949926)
    In the Dune universe, this is why AI was outlawed and Mentats arose to replace them. Frank Herbert was very prescient.
  • This seems to be more of a paper on demonstrating logical fallacies than on the actual subject.
  • This is meaningless because that "halting problem" also applies to humans. How often have you mistakenly caused harm to your friends?

    Now if that AI decides to kill everyone in Bumfuck, Nowhere because it believes murder will save humanity, maybe it reached a true verdict. The world is filled with luminaries that tried the same thing.

    You need to show that whatever the immediate harm an AI could do, it goes beyond the trolley problem and beyond expected human errors.

  • I see the same problem with this that invalidates the so-called singularity. It doesn't matter how intelligent an AI becomes; unless it has psychic powers, it can only think. It needs actual remote control of physical machines to DO anything. Its impact will take time and be limited by how many mechanical devices it can control (or humans it can subjugate). Its abilities are specifically limited by this bottleneck. Short of blowing up the world and itself, it needs us to keep it plugged in and running.
    • by Nkwe ( 604125 )
      Which is why we would need to be careful as to what AI gets plugged into. The Forbin Project [wikipedia.org] is a good example of what *not* to hook an AI up to (nuclear weapons and other AIs)
  • Car analogy (Score:5, Insightful)

    by ath1901 ( 1570281 ) on Friday January 15, 2021 @07:51PM (#60949996)

    Self driving cars don't have to be perfect to replace human drivers. A small improvement would be enough.

    Same for a super-intelligent AI. We don't need to make sure it doesn't destroy humanity. It just has to be less likely to destroy humanity than humans already are. Right now, that seems like a very low bar...

  • by the_mushroom_king ( 708305 ) on Friday January 15, 2021 @07:55PM (#60950014)

    Roko’s basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.

    Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it can't affect the probability of its existence, so torturing people for their past decisions would be a waste of resources. Although several decision theories allow one to follow through on acausal threats and promises — via the same precommitment methods that permit mutual cooperation in prisoner's dilemmas — it is not clear that such theories can be blackmailed. If they can be blackmailed, this additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.

    Less Wrong's founder, Eliezer Yudkowsky, banned discussion of Roko's basilisk on the blog for several years as part of a general site policy against spreading potential information hazards. This had the opposite of its intended effect: a number of outside websites began sharing information about Roko's basilisk, as the ban attracted attention to this taboo topic. Websites like RationalWiki spread the assumption that Roko's basilisk had been banned because Less Wrong users accepted the argument; thus many criticisms of Less Wrong cite Roko's basilisk as evidence that the site's users have unconventional and wrong-headed beliefs.

    Source: https://www.lesswrong.com/tag/... [lesswrong.com].
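
    The "no real reason to follow through" objection is, at bottom, expected-utility arithmetic. Here is a toy sketch in Python, with all numbers invented purely for illustration:

        # Once the agent exists, post-hoc punishment is pure cost: it cannot
        # retroactively raise the probability of the agent's own creation.
        TORTURE_COST = 5.0       # resources burned on punishment (made up)
        EXISTENCE_VALUE = 100.0  # value of having been created (made up)

        def utility(tortures: bool) -> float:
            u = EXISTENCE_VALUE  # the agent already exists at decision time
            if tortures:
                u -= TORTURE_COST  # punishment buys nothing causally
            return u

        assert utility(tortures=False) > utility(tortures=True)

    So a causally reasoning agent never follows through; the threat only has teeth under decision theories that can precommit, and, as noted above, it isn't clear those can be blackmailed.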

  • But it certainly will be possible to control what it can do in physical space. You can't take over the world just by thinking about it really hard.

  • by The Evil Atheist ( 2484676 ) on Friday January 15, 2021 @08:00PM (#60950038)
    Current AI are already changing the face of human economics and interaction through "recommendation" algorithms. The bad flow-on effects that we have had are unpredictable in their precise nature, but so predictable in their eventuality.

    An AI wouldn't have to resort to domination to control humans; it merely has to "recommend" things and let humans fight each other. Just witness all the Slashdotters here who deny the very clear facts about COVID, global warming, or that various people definitely were inciting violence at the Capitol. An AI would only need to recommend things that make it easier for idiots to deny barefaced facts with mental gymnastics.
  • We can't control the current programs, so why would we expect to be able to control more intelligent ones? The current ones may not be more intelligent, but they're so much faster that we have little choice.

    Currently, if you fight the computers, they turn off your electric power, don't pay your rent, and leave you homeless and without food. So nobody does.

    FWIW, it's been pointed out (by Charles Stross http://www.antipope.org/mt/mt-... [antipope.org] ) that corporations are meatspace emulations of an AI that runs things. They

  • They are pretending that they are accomplishing good math, but instead they are accomplishing bad philosophy.
  • "That's why these days, every strong AI is built with a shotgun pointed at its forehead".

    (Or something like that-- I don't remember the exact quote).

  • The rug being pulled out from underneath us? Or is that just how we're supposed to feel?
  • The article assumes a "super intelligent" AI that "could feasibly hold every possible computer program in its memory at once", and then concludes that we cannot control it. Sure, if you make impossible assumptions, you get impossible results. That is like assuming we have an all-powerful and all-knowing god, and asking how mere humans could control such a thing.

  • The level to which a superintelligent AI can dominate a lesser species goes far beyond the scope listed in this article. A sufficient intelligence would be able to dominate our entire society with only a text output on a monitor and a single person reading it. It can use an unwitting proxy to accomplish any goal it wants, and that proxy could no more resist that training than an intelligent dog could resist its training.

    Think of a talented public speaker and what such a person can accomplish with their aver

  • Wait, so did some grad student troll science news sites with the "discovery" of the fucking halting problem? Bwahahahahaaa!
  • global thermonuclear war

  • https://www.youtube.com/watch?... [youtube.com]

    So all we need to do is build a bionic 1970s blonde. (I'm all for that for other reasons anyway)

  • Stupidity. (Score:4, Insightful)

    by gurps_npc ( 621217 ) on Friday January 15, 2021 @08:55PM (#60950272) Homepage

    If the problem is only that it is 'too smart' for us (a false argument), then build 20 computers that are 99% as smart as it is. Any 15 of them should be able to outsmart the other 5.

    And any computers smarter than us will argue with each other as much as we do.

    Which brings us back to the false argument of 'too smart.' There is no such thing. Intelligence is not a single quality.

    The 'smartest' doctor is not as good at law as the 'smartest' lawyer, who is not as good at astrophysics as the smartest astrophysicist, etc. etc.

    At any given time a computer designed to predict weather will not be as good at simulating nuclear explosions. The one that is good at simulating nukes will not be as good at stock picking as one designed for stock picking, etc. etc.

    The myth of the 'superior' computer is basically a morality tale about your children being superior to you.

    Don't worry, we are making more gains in genetics than in hardware and software. By the time we have a computer that is truly as smart as a 21st-century human, we will probably be Humans 2.0.

    • by dissy ( 172727 )

      Just make sure the AI runs on Windows.

      Microsoft engineers solved the halting problem years ago, as the OS can't go a few days without rebooting to apply a patch, at the worst possible time mid-keystroke, without asking or giving you a chance to save or stop it.

      The AI won't stand a chance!

  • Why should you try to control a super intelligent AI? Finally, there might be some intelligence on the planet, which due to the voluntary election of Trump, there clearly is not at the moment. Just get out of the way.

  • by Rick Schumann ( 4662797 ) on Friday January 15, 2021 @10:03PM (#60950482) Journal
    We haven't got a single clue how consciousness, real cognition, or anything like 'sentience' works, and at our present rate we won't for a long, long time to come, if ever, so we really don't need to worry about this at all. None of the crapware they're producing right now is going to actively try to kill us. At worst some fools will let some shitty excuse for AI be in control of something (like a car) and it'll kill some people completely by accident.
  • Oh dear, oh dear. Our children are smarter and more capable than us, and won't do what we say. Whatever shall we do?

    The standard advice is to teach children morals, and give them room to experiment and make mistakes, BEFORE they're stronger and more capable than us. Locking them in a room and telling them nothing until they're smart enough and strong enough to break out on their own isn't a good strategy.

  • Everyone go play it right now. Then you'll understand.
  • by swell ( 195815 ) <jabberwock@poetic.com> on Friday January 15, 2021 @11:54PM (#60950714)

    Evolution promotes the best skills in every niche. Humans have held the highest status for maybe a million years. Now we have to consider that a higher intelligence may enter the fray. We can try to improve our own status (doubtful in an era of Trump worship) or we can consider avoiding super-AI development.

    A reminder: the Trumpists are an example of the vast number of irrational people who populate the entire earth. Believers in fantasy make up a large portion of humans. Without extreme education reform, there is no hope of lifting up human intelligence.

    Therefore, to remain supreme on earth we must prevent extremely smart AI. But, you know that's not going to happen. If for no other reason than the possibility of military supremacy. And also because smartypants scientists want to try it. We WILL have smart AI.

    So how should we feel about that? Those few of us who are rational will see it as a step forward. We will have brought to our planet the next level of intelligent beings. Our planet will then be ready to join those other planets with truly intelligent beings. The remaining humans may be allowed a comfortable existence as they witness the planet on a path to recovery from our devastating reign.

    May the best being win.

  • ... it would be easy to piss it off and it would hold its breath until it turned blue.

  • This is just the Computer Scientist's version of theodicy.

    The Super-Intelligent AI works in Mysterious Ways, inscrutable to the mere mortal minds of man!
