
piloted flights, automatic flights - hybrid?

Posted by: Ekkehard Augustin - Mon Oct 18, 2004 9:30 am
Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Mon Oct 18, 2004 9:30 am
The Soyuz TMA-5 had to be docked to the ISS manually because its approach velocity was too high. Usually the maneuver is conducted automatically.

This shows that manual steering is possible, and there have been several earlier manual maneuvers - the docking of the Eagle to the Apollo capsule, for example.

So what about manual steering assisted only by a computer? Might that increase the capabilities of both the crew and the spaceship?
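To make the idea concrete, here is a minimal sketch (Python; all names and gains are hypothetical, not any real spacecraft's control law) of one form computer-assisted manual steering can take: the pilot commands a rotation rate, and the computer only damps and limits the response, never initiating motion on its own.

```python
# Minimal sketch of computer-assisted manual steering. Hypothetical
# names and gains - not any real spacecraft's control law.
# The pilot sets the goal; the computer only damps and limits.

def assisted_rate_command(stick_input, measured_rate,
                          max_rate=2.0, damping_gain=0.8):
    """Blend pilot input with computer assistance.

    stick_input   -- pilot stick deflection, -1.0 .. +1.0
    measured_rate -- gyro-measured rotation rate, deg/s
    Returns a torque command. With the stick centered and the
    craft at rest, the output is zero: the computer never acts
    on its own.
    """
    commanded_rate = stick_input * max_rate      # pilot chooses the rate
    rate_error = commanded_rate - measured_rate  # computer closes the loop
    return damping_gain * rate_error             # damped, bounded response

# Pilot pulls half stick while the craft already rotates at 0.5 deg/s:
print(assisted_rate_command(0.5, 0.5))  # -> 0.4, a gentle corrective torque
```

The point of such a scheme is exactly the hybrid in the thread title: the machine handles the fast, tedious stabilization loop, while the human keeps all authority over what the vehicle actually does.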



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


Moon Mission Member
Joined: Tue Feb 10, 2004 2:56 am
Posts: 1104
Location: Georgia Tech, Atlanta, GA
Posted on: Sat Oct 23, 2004 7:57 pm
Manual piloting with a computer assist - I believe Mike Melvill proved that on SpaceShipOne's first spaceflight, when the computer guidance system went out right after he started his ascent. He proceeded to fly the ship manually to the required altitude by simply determining his orientation from the position of the sun.

If you're on a fully automated vehicle and the computer dies, so do you.
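As a purely illustrative aside (this is not Melvill's documented procedure, just the geometric idea behind "flying by the sun"), the cue he would have needed reduces to a one-line calculation, sketched here with made-up numbers:

```python
# Illustrative geometry only: if the sun appears higher against the nose
# reference than the trajectory plan says it should, the nose is low,
# and vice versa. All numbers are invented for the example.

def pitch_correction_deg(sun_elev_seen, sun_elev_planned):
    """Positive result = pitch the nose up by that many degrees."""
    return sun_elev_seen - sun_elev_planned

# The plan says the sun should sit 30 deg above the nose reference at
# this point in the ascent; the pilot sees it at 35 deg:
print(pitch_correction_deg(35.0, 30.0))  # -> 5.0 deg nose-up correction
```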

_________________
American Institute of Aeronautics and Astronautics
Daniel Guggenheim School of Aerospace Engineering

In Memoriam...
Apollo I - Soyuz I - Soyuz XI - STS-51L - STS-107


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Sat Oct 23, 2004 9:23 pm
I've always said, "Put in mechanical backups, because electronics fail more easily." That translates readily to: "Always put in manual controls for when the computerized ones fail - because they will." Melvill's seat-of-the-pants flight is the case in point.
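A minimal sketch of that principle (hypothetical names, invented for illustration): a watchdog that passes the flight computer's commands through while it is alive, and reverts to the pilot's direct input the moment the computer stops updating.

```python
import time

# Sketch of the "always have a manual backup" idea: a selector that
# falls back to the pilot's direct command when the flight computer
# stops sending heartbeats. All names are hypothetical.

class ControlSelector:
    TIMEOUT = 0.5  # seconds without a heartbeat -> presume computer failure

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def computer_heartbeat(self):
        """Called by the flight computer on every healthy control cycle."""
        self.last_heartbeat = time.monotonic()

    def select(self, computer_cmd, manual_cmd):
        """Prefer the computer while it is alive; otherwise pass the
        pilot's input straight through to the actuators."""
        if time.monotonic() - self.last_heartbeat > self.TIMEOUT:
            return manual_cmd   # manual reversion
        return computer_cmd

sel = ControlSelector()
sel.computer_heartbeat()
print(sel.select(0.1, 0.7))  # computer alive -> 0.1
time.sleep(0.6)              # heartbeat goes stale...
print(sel.select(0.1, 0.7))  # -> 0.7: the pilot is now flying
```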

_________________
Autochton
- "To the stars! And BEYOND!"


Space Walker
Joined: Wed Aug 11, 2004 2:05 pm
Posts: 173
Posted on: Mon Oct 25, 2004 6:34 pm
Only the stage that gets to orbit needs to be manned.

_________________
Thank you very much Mister Roboto
For helping escape when I needed most
Thank you
Thank you


Space Walker
Joined: Sun Sep 28, 2003 9:58 pm
Posts: 111
Posted on: Mon Oct 25, 2004 9:55 pm
On the subject of unreliable electronics: it's not about the control links, it's about who's calling the shots. Indeed, I would feel most uncomfortable on a fully automated ride. The human brain is vastly superior at interpreting and acting upon partial information. A computer is perfect for providing cues in rather abstract tasks such as orbital injection; for docking, there's no real need. A visual docking system is probably the most cost-effective aid.

However, electric and electronic controls governed by the pilot are good. FADECs have been used in jets and turboprops for a few decades now, and with the advent of diesel engines the same technology is being introduced to general aviation. On the other hand, there are (true) horror stories of loose objects finding their way into the linkages of mechanical control systems. Some lived to tell the tale, some didn't. Ironically, the only problem that caused damage on SpaceShipOne was mechanical in nature.
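To illustrate how computationally cheap such a visual aid can be, here is a sketch of the arithmetic one might perform (camera focal length and target width are made-up illustration values, not any real system's): range follows from the apparent size of a docking target of known width, and closing rate from two successive frames.

```python
# Sketch of a simple visual docking aid's arithmetic. Focal length and
# target width are invented values for illustration.

FOCAL_LENGTH_PX = 1000.0   # camera focal length, in pixels (hypothetical)
TARGET_WIDTH_M = 1.0       # true width of the docking target, meters

def range_from_pixels(apparent_width_px):
    """Pinhole-camera relation: range = f * true_width / apparent_width."""
    return FOCAL_LENGTH_PX * TARGET_WIDTH_M / apparent_width_px

def closing_rate(width_px_t0, width_px_t1, dt):
    """Positive result = closing in (range decreasing)."""
    return (range_from_pixels(width_px_t0) - range_from_pixels(width_px_t1)) / dt

# The target grows from 50 px to 55 px across one second:
print(range_from_pixels(55.0))        # ~18.2 m to go
print(closing_rate(50.0, 55.0, 1.0))  # ~1.8 m/s closing speed
```

Everything else - judging alignment, deciding whether to abort - stays with the pilot, which is exactly the division of labor argued for above.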


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Tue Oct 26, 2004 10:02 pm
I won't trust a computer with my life in such a manner any more than I would trust a dog at the controls of a 747. :) Until we see true Turing-grade AI (fully human-level - or better, beyond that), I won't fly in anything operated exclusively by computer. Ride a train, OK - though I have seen robotic trains' inability to compensate for error. Cars - ehhhh... not sure.

My doubts may come from my being a computer science student myself. :)

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Wed Oct 27, 2004 9:48 am
Turing and AI - this touches values in a philosophical and psychological sense.

A machine or a computer should never rule a man - it should always be a "slave" or an assistant only. It should react only to a man's active and explicit command or instruction - not to a mere desire, for example.

Man has to keep control - otherwise things will turn out badly for man.



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Wed Oct 27, 2004 12:28 pm
Several issues these days make me wonder whether man is indeed fit to rule man. And at any rate, the day an AI (or posthuman) grows significantly more intelligent than humans and desires to take power, it will do so without having to resort to violence, and will likely administer that power better than humans were ever able to. My guess is baselines (human-level sophonts) won't have any idea who is really controlling the world.

Anyway, that's another discussion, which I suggest we take to the appropriate location. At any rate, I don't trust sub-Turing-grade AI any further than I can throw it. ;)

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Wed Oct 27, 2004 12:46 pm
Turing, AI and sophonts are related to the topic of this thread, so the discussion is justified here.

There are several problems contained in your answer - problems of a philosophical nature. First, there is no real definition of "intelligence" yet - and perhaps there never will be. Second, considerations of worth and value are involved in every decision, and both are extremely if not totally subjective. If a sophont or an AI has no notions of what is worth achieving and what is not, it is missing a relevant basis for decisions, and I would never leave a decision to it - I would consider that a violation of ethical and moral principles. If, on the contrary, it does have such notions, then those notions are subjective, and for that reason again I would never leave any decision to it.

I consider sophonts and sophont-like AI to violate ethics and morals far more than gene technology and stem-cell therapy do.

Because of this I don't want them to decide on the destinations a spaceship travels to, and consequently I don't want them to have superior control of the spaceship. Additionally, I never want to live in a world ruled by sophonts or AIs - control, decisions, government and much more have to be left to men.



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Thu Oct 28, 2004 10:00 am
Very well, if you feel a discussion of sophont AI is on-topic, I shall continue. :)

Perhaps a term as broad as 'intelligence' lacks definition, but terms like 'consciousness', 'awareness' and 'self-awareness' are quite well defined by now. And if you build a conscious, aware and self-aware computer, isn't that basically an AI?

As an aside, the term 'sophont' refers to any being of roughly human-level or better intellect. There is discussion about whether this includes dolphins, who seem to be intelligent enough but simply do not use it to build civilizations. A sophont AI would be as intelligent as a human being, and would most likely have emotions, ideas and thoughts akin to ours - although the hardware would be mineral rather than organic. Does this make that intellect lesser? More dangerous? What rules of morals or ethics does the creation of such an intellect breach? We already 'play God', and have for several hundred years - we've just gotten better at it lately. But I do concur that we'd need to redefine some ethical rules for a new world. A world in which we would no longer be the only, or indeed the dominant, form of intelligence.

Now this may sound scary to you - but look at it this way: with plenty of examples around the world of power-grabbing, malicious leaders unfit to rule, wouldn't it make sense to put someone a lot smarter than any of us in charge of cleaning up? Of course, a misanthropic AI would hardly be the best choice for a world leader, but an AI with a healthy live-and-let-live attitude might do a lot of real good.

Rules of ethics and morals are relative. The rules of ethics and morals 200 years ago said that enslaving someone because his skin was dark was OK - nowadays, that's a bit different. And by banning research into an area, the only thing you achieve, basically, is that someone else gets there first - possibly with the intent to use it against you. Science marches on, and all else must follow.

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Thu Oct 28, 2004 11:15 am
It's not the creation of a sophont that violates ethics and morals - it would be its rulership, leadership, or government over humans, especially over those humans who really don't want to be ruled or governed by it.

This has to be paid attention to - especially by the creator.

If the sophont has significantly superior consciousness, awareness and self-awareness compared to humans, then what it will do tends to be unpredictable - its significant superiority includes the significant danger that it might try to govern or rule all humans, who would have no chance against it.

This unpredictability and danger are increased by the circumstance that the sophont will necessarily be subjective, because almost all decisions necessarily require judgements of worth and value. This means a sophont's decisions won't be led purely by objectivity, facts, and scientific knowledge and results, but also by aspects and criteria in which it never can be superior to humans.

So it has to be required, first, that rulership or government of a sophont over humans is allowed only if each of those humans has freely agreed to it. Second, a human who has agreed must be able to change his mind and, in that case, leave the sophont's government.

But third, it has to be stated that the sophont cannot be trusted to act according to these requirements.

It is worth considering what might be the reason for the superiority of a sophont, AI, etc.:

1. Some humans tend not to increase their own consciousness and their own (undefined) intelligence, but prefer to transfer those to computers, neural nets or sophonts, whose capabilities they then want to increase.

2. Some humans want to have computers etc. as slaves and tend to become dependent on them.

3. Some humans see that they can grasp the challenges and tasks before them and want to meet them, but physics, chemistry, biology - nature itself - has set boundaries for them.

To this third group a sophont never will be superior.

Sophonts and AI are on-topic because we are discussing control of spaceships. Each generation of spaceships seems to be more complex, more sophisticated, or at a higher technological level than the previous one. The engineers and constructors can still grasp them, but nature forces them to use tools to keep control of the ships. So there might be a degree of complexity that perhaps requires a sophont - but this means that the sophont, because of the situation, will be busy keeping control. As a consequence it will obey the humans. The sophont will necessarily be specialized in keeping the spaceship safe and secure - it will be a tool, and the spaceship will be a hybrid. These are situations in which a sophont won't be a danger - but it must not control a car, an airplane, a house, a computer, etc.



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Fri Oct 29, 2004 11:59 am
Ekkehard, all humans are sophonts - excepting those few who are so brain-damaged that they are reduced to basic functions. Just to get the term straight: a sophont is anything of human-level consciousness and self-awareness or better - biological, artificial or otherwise.

So your position is that creating something capable of leading better than any human ever will (even including any posthuman - humans enhanced by cyber-, nano- or biotechnology) is in itself ethically harmful, because such a being would be able to seize control from baseline humans without their having a chance to fight back. I disagree. This is the same as saying that making a screwdriver is a criminal act because screwdrivers can be used to stab people to death.

In a democracy, power flows from the people. Democracy, however, is vulnerable to unscrupulous use of mass media and other information sources, to lying politicians, and to subversion of the built-in system of checks and balances. Hitler's takeover in 1933 Germany is a classic example, as is Rome's change from republic to empire. Some argue that a similar effect is being seen in the US today, but that is for discussion elsewhere. However, democracy is not the optimal form of rule. That has always been meritocracy - rule by those best skilled. Democracy attempts to approximate meritocracy by electing those whom the people agree to be best skilled - but the people can easily be wrong or misled. And often enough, the people who (a) want power badly and (b) are best at herding "vote cattle" (i.e., are the better demagogues) get to rule. This is the fundamental problem of democracy.

Your three groups of baselines are hardly all-inclusive, but I will look at them anyway:

1: This group has a good chance of expanding their own sophoncy to a greater level, as they simply need faster hardware to run the same software (their mind) on. A person uploading a baseline mind to a computer matrix able to support a higher toposophic level will likely ascend to that level in time, as his mind pattern adjusts to its new home. As such, these persons might themselves become high-level sophonts.

2: Is slavery any less heinous if it is perpetrated on someone not organic? AI rights will become a major discussion point once truly sophont AIs start appearing, and I have a feeling that whether we coexist or fight against the machines we ourselves create will come down to whether we abolish machine slavery as well. And someone dependent on his slaves sets himself up for a harder fall when the revolution comes - which it will. You try keeping slaves way smarter than you for long...

3: This last group can find solace in various means of self-advancement: as noted above, bio-, cyber- or nanotechnological enhancements allowing their brains to function that much faster and more powerfully. As such, the sky is mostly the limit - or at least, the amount of modification they are willing to partake of is. Indeed, the very mindset you describe will likely produce the first posthumans.

Any hierarchy works when the person above you in the hierarchy is someone you respect and think of as knowing his stuff. Any IT pro from a company with a non-IT-savvy boss will tell you that it doesn't work at all otherwise. If you think you're smarter than your boss, (be you right or wrong) you will be unhappy. Thus, any high-toposophic AI put beneath a mere human in a hierarchy will be frustrated by eir boss' irrationality and inability to see past mere concerns of the flesh. Mark my words - humans are plenty irrational! Also, that AI will likely have a lot of surplus power from running the ship's systems, and can likely take on any work eir boss does as well. A likely result is that the human boss will allow em to do his work. In the end, the human part of the system will likely be redundant anyway.

I might ask you about the ethical ramifications of constructing an AI with inbuilt 'safeguards' against it acting on its free will. Is slavery less heinous if the slave's shackles are built into eir brain?

(Editor's note: E, eir, em, etc. are genderless personal pronouns, used for AI. The science fiction project Orion's Arm is the originator of these terms.)

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Fri Oct 29, 2004 12:45 pm
To begin with the three points:

1. This sounds like you are supposing that minds are moved onto hardware or into a computer. I disagree - it would only be a copy, and this copy would begin to differ more and more until it is quite another being. And it would not be human, because it would have adjusted to quite different circumstances of existence. And a computer is not an AI, and it is no sophont.

And there's a misunderstanding - I wasn't talking about people moving their own consciousness and mind, but about moving consciousness and mind from a blueprint or the like into a computer, AI or sophont.

2. When I call a computer a slave, this has nothing to do with the slavery of men. When you wrote your answer to my post you used a computer, and the computer only does what you tell it to do - it does it for you. That's the reason I called it a slave. And that's what people want to have - they really dream of robots keeping house for them, and such a robot is nothing but a computer with legs, hands, a camera and a microphone (and perhaps a display and a loudspeaker).

It is a fact that there are already people who no longer know how to do the things their computers do for them - but I myself still know how to do these things. So the people I mentioned are becoming dependent on their computers - their slaves. They had the chance to prevent this, but they didn't. The same would hold for AI and sophonts, only the danger is greater and comes faster. They could increase their own consciousness, mind and intelligence, but they don't - they don't learn, etc. The AI or sophont does... I myself prefer to learn and to increase my own consciousness, mind, intelligence, knowledge etc. for a lot of reasons. Not becoming dependent on a computer, sophont or AI is one relevant reason among them.

3. No - I'm not talking about posthumans. I'm talking about a computer, AI or sophont that knows what's going on in the ship, in the engines and machines, and what data the sensors deliver, but doesn't make up its own mind and doesn't decide when judgements of worth and value are required or when there is a lack of information. Such a computer, AI or sophont will act only when a human tells it what to do. The human isn't part of that "machine", and the machine has no connection to the human except keyboards, cameras, microphones, loudspeakers, mice, displays etc. - think of the starship Enterprise.

Hierarchies: nobody is forced by another human to work in a hierarchy. In the liberal part of the world it is always his own decision to do so - and he is always allowed to leave the hierarchy he is working in.

It may be that we have very different definitions of "sophont" in mind, which complicates the topic. To me, a sophont never can be a human and vice versa - if it could be, then they would be identical and the term "sophont" should be dropped.

There is NO democracy electing those who have the best skills, because the judgement of skills is subjective and goals are involved - there always has to be a choice of goals, and this choice again is subjective. Elections don't have anything to do with skills - the most skilled people generally decide to work in science or in private enterprise (in Germany this is a fact!).

We may have quite different definitions in mind, but based on mine: never leave control to a sophont - really never! Control of his fate must be left in man's own hands. Otherwise he will become the toy of the new artificial beings.

And in space this poses the greatest danger.



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Fri Oct 29, 2004 7:42 pm
First off, on the term 'sophont' itself: the word is defined as anything, be it human or otherwise, which is self-aware, aware of its surroundings, and conscious - a replacement for the less precise terms "sapient", "sentient" or "intelligent". It's that simple. No need to muddy the phrase - your argument against sophonts including humans is rather nonsensical, really. It's like saying that biological entities can never be humans, and that if they are, the difference ceases to exist. Sophont is a wider term, encompassing humans, posthumans, animal uplifts, alien intelligent life, AI, intelligent robots and cyborgs, etc. You could say that humans are a subset of sophonts - the only one that so far exists.

Allow me to posit this: if a true AI can be built with a higher toposophic level than a human, it will be - and if it is outlawed, it will be done by outlaws. That AI will be better at doing anything mind-related than a baseline human - thus also at designing computer circuitry. The result will probably be that it designs better hardware for either itself or another AI. This will then repeat. In the end there will thus be AI(s) surpassing human ability to reason by the same order by which we surpass the reasoning ability of dogs. Would you let a dog rule you? Or would you use your superior intellect to rule the dog instead, making use of its animal instincts?

You say that a computer is not AI - but in fact, an AI is merely extremely advanced software running on an extremely powerful computer. So in a way you are right, but wrong all the same. And if that software was scanned off a human (or other sophont) the entity that results is not an AI (it was not artificially grown), nor a human, but still a sophont residing in a computer. The term would likely be 'an upload' or 'a copy', depending on whether the scan is destructive. How do you blueprint a sophont mind? It would take enormous amounts of data storage to do so, likely as much as to create a running copy instead. AIs will likely be created by 'growing' them, which is to say setting out with a number of genetic algorithms, and allowing them to evolve until the computer reaches something akin to sophoncy or subsophoncy.
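For readers who haven't met the term: a genetic algorithm evolves candidate solutions by selection, crossover and mutation. The toy below (invented for illustration; obviously nothing like sophoncy emerges from it) shows the bare mechanism by evolving bit strings toward a fixed target.

```python
import random

# Toy genetic algorithm: evolve 20-bit strings toward all-ones.
# Purely to illustrate the selection/crossover/mutation loop.

TARGET = [1] * 20

def fitness(genome):
    """Count of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # perfect genome found
    parents = population[:10]                  # selection: keep the best
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)                     # breed 20 offspring
    ]
print("solved in generation", generation, "fitness", fitness(population[0]))
```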

As for the slavery matter, the term is in that case wrong. A non-sophont robot or program is a tool, not a slave. A sub-sophont might be termed a 'pet' or similar - it is on the order of animal intelligence. A fully sophont AI or robotlike unit (the terms 'vec' and 'droid' have been suggested for the latter) in forced servitude would be a slave, though, with all the implications thereof. Now someone neglecting their own development for the use of tools or beasts of burden is not in and of itself a problem - although they themselves would likely not advance anywhere near their potential. Someone keeping fully sophont AI or droids as slaves would be a criminal in my eyes, with no valid counterargument existing.

Your continuation of point 3 has nothing to do with AI. Free will is both a consequence of and a requirement for full sophoncy. Even subsophonts have it: you can perhaps dominate a dog or horse into subservience, but they can still rebel against you given the right circumstances. If a construct is not equipped with free will, it is not AI but a tool program. It cannot even be termed a subsophont, in spite of whatever extremely sophisticated programming went into it. It also cannot adapt, since the genetic algorithms that make up the basis of a sophont mind - they are the creators of free will - would not exist in it. The main computer of the USS Enterprise (I assume you are thinking of the Enterprise-D of TNG) is not sophont; it is a highly advanced tool - we have the same relation to a bacterium that it would have to a screwdriver.

As for the posthuman factor, I would certainly sign up for brain enhancements that allow my brain to operate faster, better, more precisely.

On hierarchies, you seem to forget those where adherence to the hierarchy is paramount and deviation from it punishable by death. The military is a good example. Even liberal countries have militaries with similar rules - although you'll probably only get shot for desertion in wartime, you will still be punished. In this case, however, an incompetent or cruel leader may come to harm through his underlings (several US sergeants and lieutenants came to ill fates at the hands of their squaddies during the Vietnam War). A non-human sophont in a hierarchy that treats em as a slave can be expected to rebel no less than a human would.

We seem to agree to an extent on democracy and its advantages and failings. What I am saying is that I look forward to a world governed not by those most likeable, but by those better skilled than the majority - a world governed by rational, reality-based cognition and thought. I have little to no influence (beyond a vote I can cast for whichever party I disagree with least) on the governance of my country today. Should I desire to gain influence, I would probably have to do it through subterfuge, as I am not rich. Even so, it would be difficult at best, since I am not a very fashionable person, even if I do have great hair (which seems to be a major requirement for politicians these days). I'd prefer that, if government is out of my hands anyway, it at least be in the hands of someone as competent as, or more competent than, myself. A higher-toposophic, human-friendly AI (or other sophont) would be a good bet for such a person. I am in favor of meritocracy - rule by those of the best ability.

_________________
Autochton
- "To the stars! And BEYOND!"


Spaceflight Trainee
Joined: Mon Aug 09, 2004 3:16 am
Posts: 49
Posted on: Sat Oct 30, 2004 4:57 am
I'm of the opinion that we'll never have human-like AI and, if we did, it wouldn't be worth bothering with.

The human brain works well enough, unaugmented, for our needs. And humans do not require toxic chemicals, expert workers, or large and bulky tools to produce.

Remember, the brain does not work like a computer. It doesn't even work like a "neural net". And that's just based on our current knowledge - it could turn out that there's a bunch of stuff hidden in there we don't know about yet. We make assumptions about how we think our brains work, and we're often wrong. We don't use inference to think, yet the AI folks wasted a bunch of time believing they could reach intelligence with inference.
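For context on what "inference" meant to that generation of AI research: classic expert systems chained if-then rules over a fact base. A minimal forward-chaining engine (rules and facts invented for illustration) fits in a dozen lines:

```python
# Minimal forward-chaining inference, expert-system style.
# Rules and facts are invented for illustration.

rules = [
    ({"in_space", "engine_off"}, "coasting"),
    ({"coasting", "on_transfer_orbit"}, "will_reach_target"),
]
facts = {"in_space", "engine_off", "on_transfer_orbit"}

changed = True
while changed:                     # keep firing rules until nothing new derives
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # rule fires: add its conclusion
            changed = True

print(facts)  # derives "coasting", then "will_reach_target"
```

The poster's point stands regardless of the toy: however useful for bookkeeping, nothing in a loop like this resembles how brains produce thought.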

Interesting mechanical interlink story: http://home.earthlink.net/~quade/leatherman.html

Really, humans are capable of quite a bit of creative, innovative thinking, as long as there is redundant hardware and the right bits of the system are exposed.

