
piloted flights, automatic flights - hybrid?

Posted by: Ekkehard Augustin - Mon Oct 18, 2004 9:30 am
Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Sat Oct 30, 2004 12:09 pm
Wirehead, yours is the other big theory on future development, which the mathematician Vernor Vinge touched upon in his description of the 'knowledge singularity'. It is based on the idea that the most complex brain-like systems sophonts can create are less complex than the ones they themselves possess. In that regard, technology will likely reach some sort of plateau, where the human brain simply cannot cope with the problems of increasing technological development, and we can neither build any brain more sophisticated than our own nor improve on the one we have. This will mean that technological change will begin delivering diminishing returns, and so technology will stagnate. The end result will probably be that humanity will live out its time, stagnate, and die. In such a scenario, I say that we have perhaps 10,000 years left as a species... But that's just my guess.

It is true that the mind does not work in the same way as a normal desktop computer. However, quantum computing, an area under strong development just now, is delivering results that seem to indicate that such a system has many similarities to the human brain's neurons. A large enough quantum computer, or a network thereof, running genetic algorithms might in the end create viable AI. We cannot say - it is a matter of experimentation as yet. IMHO, as a CS student (a Danish one - from DIKU), AI is not only a possibility but indeed a certainty.
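
To give a feel for the genetic-algorithm part, here is a toy classical sketch in Python (my own invention for illustration - nothing quantum about it): a population of bit strings evolves toward an arbitrary target purely by selection and mutation, with nobody programming the solution in.

Code:
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # an arbitrary goal pattern

def fitness(genome):
    # count the bits that match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# start from a completely random population
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # keep the fittest half, refill with mutated copies of survivors
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(generation, max(population, key=fitness))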

I am of the opinion, myself, that if we can improve our available brainpower, we are morally obliged to do it for the betterment of all.

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Sat Oct 30, 2004 1:09 pm
Hello, wirehead,

The human brain is one of the origins of neural nets. Each cell of a human brain is connected to hundreds or thousands of other cells around it, and it reacts to signals from most of these surrounding cells in parallel. While reacting, it sends signals to many of these other cells too. At a certain level of these activities a human becomes conscious of thoughts, emotions and more. The construction of neural nets imitates this - a neural net consists of several processors connected to one another like the cells of a human brain.

And, Autochton, these processors are not provided with software like PC programs - nor is the neural net as a whole. There is no such thing - neural nets have to be trained, just as humans have to be trained. Neural nets learn their capabilities by being trained.
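
As a rough sketch of what "trained, not programmed" means, take a single artificial neuron with the classic perceptron rule (a far cry from a real brain, and the numbers are invented): nobody writes the AND rule into the code - the weights acquire it from examples.

Code:
# A single artificial "neuron" learning the logical AND function.
# The rule itself is never programmed; the weights are adjusted on examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def output(x):
    # weighted sum of the inputs, then a simple threshold
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for epoch in range(20):                  # repeated training passes
    for x, target in examples:
        error = target - output(x)       # compare with the desired answer
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([output(x) for x, _ in examples])  # -> [0, 0, 0, 1] after training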

From this it follows that a human mind, consciousness etc. cannot be moved or copied to a neural net. And there are substantial obstacles to doing that - a human brain has around 10^19 cells, while perhaps 10^9 processors are working in the whole world today. So 10^19 / 10^9 = 10^10 times that number of processors would be required to construct one neural net of human capacity.

But today's neural nets are considered not to work well on their own - they need to be assisted by normal computers and their programs - there is a German scientific article in my discipline on this. There was an experiment on using neural nets for forecasting exchange rates. The results indicate that the nets judge the future highly subjectively because of the data their training has been based on.
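
To make the point about the training data concrete, here is a deliberately crude sketch (invented rates and a trivial "model" in place of a real net): a predictor trained only on a period of rising rates can do nothing but predict further rises, whatever the future really brings.

Code:
# Crude illustration of training-data bias (invented rates, not a real net):
# learn the average day-to-day change of the training period, extrapolate it.
training_rates = [1.00, 1.02, 1.05, 1.06, 1.09, 1.12]  # rates only ever rose

changes = [b - a for a, b in zip(training_rates, training_rates[1:])]
learned_drift = sum(changes) / len(changes)  # the only "knowledge" acquired

def forecast(last_rate, days):
    # the model can only repeat what its training period showed it
    return [last_rate + learned_drift * d for d in range(1, days + 1)]

print(forecast(1.12, 3))  # predicts further rises - even into a crash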

The last I know about neural nets is that there is no mathematical understanding of them yet - all that's known is that they work in principle. So I don't want them to control a space ship or to govern anything - but they can be used as ONE kind of consultant TOGETHER with other kinds.

AI I know from my profession in the form of data mining tools based on data warehouses. These data mining tools are, in total, mathematical and statistical methods - some of them highly sophisticated - plus neural nets.
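
For readers who don't know these tools, a minimal sketch of the statistical side (the figures are invented, and a real query against a data warehouse would be far more involved): a plain correlation between two columns of a table - the kind of elementary building block such tools are made of.

Code:
# Pearson correlation between two columns of a toy "data warehouse" extract.
# All figures are invented for illustration.
ad_spend = [10, 12, 15, 18, 20]   # hypothetical monthly advertising spend
revenue  = [40, 44, 52, 58, 61]   # hypothetical monthly revenue

n = len(ad_spend)
mean_x = sum(ad_spend) / n
mean_y = sum(revenue) / n
cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, revenue)) / n
var_x = sum((x - mean_x) ** 2 for x in ad_spend) / n
var_y = sum((y - mean_y) ** 2 for y in revenue) / n

print(cov / (var_x * var_y) ** 0.5)  # close to 1.0: a strong linear relation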

Given a certain number of processors and a certain complexity of their connections, a neural net might perhaps acquire consciousness in the future - but this too means that it would have to learn during its whole existence.

Another problem is the difference between a dog and a human - a dog doesn't have a consciousness or awareness like a human. That's the reason why your arguments are valid in their case. Take a primate like a gorilla, and we are dealing with a low-level consciousness and awareness below the human level - this has been scientifically researched to some degree. Handling these primates is significantly harder than handling or controlling a dog. These primates to a high degree behave like humans - they betray one another, for example, and they betray men too. They learn to communicate with men by language (manual signs and signals) and they show human-like emotions.

All this indicates that a human governing a dog cannot be compared to a superior other being governing a human - because of the difference that a dog isn't conscious (proven) but a man is. The second case would be cruel and crazy in a manner the first case isn't. This is one of the reasons why creating chimeras is forbidden, at least in Germany.

I have no problem with your definition of sophont - but there may be serious communication problems between several kinds of sophonts, which is a danger, as we know from other communication problems.

At today's level of research and knowledge about the human brain, neural nets and AI, none of these should have total control of a space ship - the risk that their judgements are right concerning themselves but wrong concerning the human crew is much too big. And future research on all this could reveal that this risk will never decrease but perhaps increase - we have to wait and see.

No Soyuz, no Shuttle, no SS1 etc. should be under the complete control of one of these, and man would do better to improve himself as far as it is possible for him. This differs from individual to individual, I suppose.

But AI and neural nets may do good service in providing consulting and warning - this would be a good hybrid concept. There should be no direct connection between man and machine as long as we are not quite sure that this doesn't change or manipulate man's capability of right, human-oriented judgement.

AI and neural nets should be used only when man's brain capacity is exhausted...
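
To sketch the hybrid concept I have in mind (the sensor names and thresholds are invented - this is not any real avionics system): the machine only warns and recommends, and the pilot remains the one who judges and acts.

Code:
# Hypothetical sketch of the hybrid concept: the machine advises, man decides.
# Sensor names and limits are invented for illustration only.
LIMITS = {"cabin_pressure_kpa": (95.0, 105.0),
          "hull_temp_c": (-50.0, 120.0)}

def advise(readings):
    # returns warnings - never commands; control stays with the pilot
    warnings = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if not low <= value <= high:
            warnings.append(f"{sensor} = {value} outside [{low}, {high}]")
    return warnings

readings = {"cabin_pressure_kpa": 91.0, "hull_temp_c": 80.0}
for w in advise(readings):
    print("ADVISORY:", w)  # the pilot reads this and judges for himself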



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


EDIT: Hello, Autochton, to answer the reply you posted while I was writing mine: this other theory seems not to take into account the continuing evolution of the human brain. No one knows whether it is really going on, but the research into the past four to six million years of human evolution indicates significantly that the human brain has grown in reaction to the tools, methods, techniques and technologies that the given capacity of the human brain invented or created. To me it seems that this evolution is still underway, because scientists and engineers keep finding additional tools, methods etc. and new unsolved problems to apply them to. We keep on learning. Currently this might mainly be causing a reorganization of our brain and an orientation towards solving problems by social organization. But once this reaches the borders of its capacity, our brain may take the next evolutionary step. This too is to be expected when small groups of humans begin to colonize space, because they will be faced with new circumstances that no man ever had to manage before.


Spaceflight Participant
Joined: Mon Jun 14, 2004 8:48 pm
Posts: 55
Location: Copenhagen, Denmark
Posted on: Sun Oct 31, 2004 3:21 pm
You seem convinced that humans already possess, by default, the ability to judge each other rightly. Sorry for asking, but what on Earth gave you that idea? Been watching TV lately? There is an excellent counterproof of that thesis there. Civilization is learnt behavior - our default is to screw each other over and be egotistic in extremis. Nothing is quite as inhumane as man to his fellow man. It is our primary weakness.

An AI, unfettered by immediate concerns of survival, does not necessarily have this weakness. Whether it does or not is a matter of research. An AI, depending on its 'upbringing', can be as civilized and human-friendly as any human - or as vicious and destructive as any human. The trick is to make sure that the strongest of them are on our side...

As to your brain cell numbers, you miss a few important points. First off, the brain is not exclusively built up of nerve cells - there is fatty tissue, blood vessels, etc. as well, which detract from that number. Secondly, not all the nerve cells are actually used for thought processes - an estimated 10-20%, perhaps, are. Thirdly, there is little correlation between one computer processor and one nerve cell: a single silicon-based processor is stronger in some suits and weaker in others than a human neuron (with the assumed infrastructure in place for both). However, a quantum computer is theorized to be able to achieve more processing power than any one neuron in a human brain, and possibly whole clusters of them. This, I feel, offers the best possibilities, also because quantum computing is displaying some of the elements of organic computing: real randomness capability, intuitive computing, etc. It is the wave of the future, if anything is.
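
On the "real randomness" point, here is a toy sketch (a classical simulation only - a real quantum device would be genuinely non-deterministic, which a pseudo-random generator can merely imitate): measuring a single qubit according to the Born rule.

Code:
import random

# Toy classical simulation of measuring one qubit.
alpha, beta = 0.6, 0.8  # amplitudes of |0> and |1>
assert abs(alpha ** 2 + beta ** 2 - 1) < 1e-9  # probabilities must sum to 1

def measure():
    # Born rule: outcome 0 with probability alpha^2, otherwise 1
    return 0 if random.random() < alpha ** 2 else 1

samples = [measure() for _ in range(10000)]
print(samples.count(0) / len(samples))  # ~0.36, matching alpha^2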

You say that a man being governed by a being as superior to him as he is to his dog would be cruel and crazy... But why? After all, to the man, the superior sophont would be inscrutable, but it would look out for him and be nice to him. You'd be surprised how many humans would prefer this to actually having to make their own decisions and take the consequences thereof. A man whose dog hates him cannot control that dog either - it will bite him, ruin his belongings, and in general make a nuisance of itself any way it can, in the end likely running away given the chance. A rebellious human governed by a comparably superior sophont he hates would do very little different - though perhaps his rebellion would cause more collateral damage. Lastly, given that we're on the verge of space anyway, there are a lot of places to go where you'd be the highest sophont around, if you can't abide the thought of something non-human being smarter than you... :)

_________________
Autochton
- "To the stars! And BEYOND!"


Moderator
Joined: Thu Jun 03, 2004 11:23 am
Posts: 3745
Location: Hamburg, Germany
Posted on: Sun Oct 31, 2004 5:56 pm
I haven't read your whole post yet, but your opening words indicate that I should answer at once.

I'm not convinced that humans are already able to judge each other rightly - on the contrary, I suppose very seriously that they never will be able to - and that no other being will be able to in the future, or has been able to in the past, either.

I suppose that because of repeated and long philosophical thinking about this. Much too much to post here, but I'll try to say it shortly.

First, such things are a subtopic of my scientific discipline. In some normative theories, complete information about the future and the past, about customers and suppliers, about products etc. is assumed in order to draw conclusions from these theories. Under these conditions right judgements are certain - and they seem to include right judgements of each other. But in reality most of these assumptions are invalid - which really means that judgements necessarily tend to be wrong. That's one of the sources of risk.

The reason for the assumptions being wrong in most cases is nature - physics, species being numerous and very different, and the like. I'm moving away from my discipline in writing these words.

Second, from neurology and psychology it's known that it is extremely difficult to get insight into each other - the other human can hide his thoughts, emotions and much more, and someone has to use tricks and instruments to get around that.

Third, neurology hasn't achieved a true understanding of brains until now, and neural nets are not understood yet either, as I reported.

This list doesn't claim to be complete. But at least the first point is valid for every other being too - including AI and sophonts...

The human fault, seen under these aspects, is to try to judge rightly - we would do better to accept that we are not perfect. That means each of us should be conscious that he himself isn't perfect and that the other isn't perfect either. Then nobody would try to be something he can never be. Each human would be more realistic - and by this would learn more and better, and evolve better. Nobody would need AI or sophonts as extremely as he currently does. Most of the requirements for these technologies resulted from insane doings, desires and claims in the past. We should learn from that - then things would go better and we would extend into space more easily.

This is all very, very short - don't forget that.

It applies to hybrid spaceships very well - AI, neural nets and sophonts can never be perfect in controlling space ships. So control should be left to ourselves to a significant degree - we are the owners of those ships (property rights - think of that...). If the pilot and the crew always keep the imperfectness in mind, they will act much more correctly than if they do not, or than if they leave control to AI, sophonts, neural nets or computers.



Dipl.-Volkswirt (bdvb) Augustin (Political Economist)


