  LongeCity
              Advocacy & Research for Unlimited Lifespans





Priorities: Biotech, Computation, Nano, etc


51 replies to this topic

#1 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 14 May 2005 - 12:22 AM


Recently Don Spanton and Nate Barna hit on one of the topics I struggle with almost daily. To summarize: Don has taken the position that biotechnology is the route by which the low-hanging fruit can be harvested, accelerating progress towards increased longevity the fastest. Nate takes the position that increasing intelligence is the more profitable route. Given a common goal of increasing personal survival, I'm curious what people think about where the critical levers are in the science/engineering/computation fields.

Currently I work very much on the biology/bioengineering side of neural interfacing, but at the end of the summer I will probably jump headlong into the computational world (I'm most likely going to transfer to a PhD program in AI). From my assessment of neuroscience, we are at a point where we don't need a great deal more information to begin robustly modeling cortical information processing. I see understanding this type of processing as one of the key problems that will open gates in multiple fields (increased human intelligence, AI, and hence all other fields of human endeavor, especially those that require understanding of highly complex systems).
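[Editor's note: to make "modeling cortical information processing" concrete, here is a minimal sketch of the leaky integrate-and-fire neuron, the simplest building block commonly used in large-scale simulations of biological neural networks. The function name and all parameter values are illustrative assumptions, not anything proposed in this thread.]

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    """Leaky integrate-and-fire neuron, integrated with Euler steps.

    Integrates dV/dt = (-(V - v_rest) + R*I) / tau over the samples in
    input_current (one sample per dt, in ms). Returns (voltage_trace,
    spike_times): a spike is recorded whenever V crosses v_thresh,
    after which V is reset to v_reset.
    """
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by input.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)  # spike time in ms
            v = v_reset
        trace.append(v)
    return trace, spikes


if __name__ == "__main__":
    # 500 ms of constant 2.0 nA drive produces regular spiking, since the
    # steady-state potential (v_rest + R*I = -45 mV) sits above threshold.
    trace, spikes = simulate_lif([2.0] * 5000)
    print(f"{len(spikes)} spikes in 500 ms")
```

Real cortical models chain thousands of such units with synaptic connectivity, but the integrate-threshold-reset loop above is the core dynamic.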

To give you my personal perspective: I was trained as a molecular biologist (MS in molecular neuro), worked as a biotech analyst for a while, and have been in neuroengineering for three years, but now I am convinced that computational modeling of large-scale biological neural networks might be the field where I can make the greatest impact. A lot of this has to do with my particular strengths, which play towards synthesizing large amounts of data, but I think there may be a shortage of system-level thinkers in many fields, which may be slowing progress on all fronts.

In addition, I believe biotech in particular suffers from a lack of systematic thinking with regard to how experiments are carried out. Efforts such as Aubrey de Grey's SENS and the up-and-coming culture of bioengineering will likely change the outlook somewhat, but the current culture (slow progress due to over-regulation, lack of systematic thinking, lack of investment of research dollars in critical targets) is not likely to go down without a fight.

I will be interested to hear people's opinions.

#2 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 14 May 2005 - 03:40 AM

I plan on earning an M.D. in molecular/cellular biology and Ph.D.s in both Bioengineering and Nanotechnology from the University of Washington. My goal is to work towards a cure for aging (hopefully I will be able to work with Aubrey de Grey). When that is well on its way or almost finished, I plan on becoming the world's first nano-doctor: a practitioner of nanomedicine. I believe nanomedicine is the most important thrust area. Why? Because the furthering of nanomedicine will also help the A.I. and neurobiological enhancement arenas. Medicine has always been the spurring factor in technological development. If we develop nanoscience for the benefit of medicine, we will also further the nanomaterials and procedures that are essential for true AGI. I believe we should develop nanomedicine in congruence with AGI so human intelligence can benefit from AGI directly. We are a very creative species; who knows how long it will take to develop a truly creative AGI. That is my two cents.

#3 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 14 May 2005 - 05:32 AM

critical levers

I find it encouraging that someone with an actual background in cognitive science thinks there's any chance in AI. I've never had the imagination to see such a chance. In my eyes there seem to be too many orders of magnitude in complexity missing. But I don't know what I'm talking about. Can you briefly outline your basic strategy for me?

I completely agree that the biggest obstacle to LE progress on the biotech front is people's cultural attitudes. That is not only overregulation from the outside, but also disinterest in applications and utter reluctance among bioscientists to reflect on their goals in general. I have the feeling that people go into AI research because they want to create AI, while people go into biochem because they have the vague feeling that it might impress the opposite sex and bolster their own self-image.

From what I've learned about cell replacement, I get the pretty clear impression that the goal-targeted perfection of this technology can get us near-indefinite life extension, if only substantial numbers of qualified people started wanting it. (With some aggregate removal -- maybe.) No human and no AI need understand the complexity of the cell or the mechanisms of how we age in any substantially greater detail than we have today in order to do it. All I can see it takes is extensive R&D into methods to get rid of wild-type cells, make replacement ones from the germ line, and limited amounts of tissue engineering. This is not the creation of anything substantially new, but merely the adaptation of existing technology for each and every tissue of the body.


#4 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 14 May 2005 - 06:13 AM

Hello, Peter. It’s nice to see you.

ocsrazor Given a common goal set of increasing personal survival, I'm curious what people think about where the critical levers are in the science/engineering/computation fields.

For the individual, I think it all depends on perceived windows and which transpersonal-based needs are worth potentially sacrificing and which are not.

I should note here that Don’s and my dialogue began with the basic disagreement about the pragmatic significance of biotechnology, specifically in the areas that receive the most heat from bioconservatives. The disagreement surfaced only because I was spontaneously struck one day by confusion as I was wondering why there’s all this displaced passion against Kass when the marginal gain he blocks doesn’t seem to justify a preoccupation with his moves, especially since they can be dodged.

But that disagreement quickly evaporated into oblivion when I realized it’s none of my business if people want to advance stem cell research and therapeutic cloning, even when they know that political forces are perhaps as great a barrier as discovering solutions. That is, people advancing biotechnology do not violate, in any detectable manner, my inherited situation or value set. All I can do, and not that I really need to or should do it, is suggest where I think some efforts should be redistributed, given certain perceived needs, and then back off and not expect others’ needs to match mine if their course doesn’t negatively affect mine.

I acknowledge there are other areas of biotechnology whose obtained solutions are highly valued and not relevantly contended. Therefore, my blanket judgment “biotechnology is a waste” was a mistake.

That aside, I still think that biotechnology doesn’t have the potential to solve as many problems falling under the class Human Condition as nanotechnology and AGI (both of which don’t really need biotech beyond neuroscience). However, in the Human Condition problem class, if the few problems which can be solved with biotech are one’s only main concerns, especially those near-term time-sensitive problems, then a one-step problem-solver-of-the-human-condition has absolutely no business patronizing bioengineers or biotech enthusiasts, except maybe over a friendly cup of tea and demanding, “Tell me… just FMI… how can you be indifferent to cognitive science at such an exciting and opportune time like this, for Christ’s sake?!”

#5 kraemahz

  • Guest
  • 157 posts
  • 0
  • Location:University of Washington

Posted 14 May 2005 - 06:46 AM

justinb
Ph.D.s in both Bioengineering and Nanotechnology; from the University of Washington

Stole my idea! Maybe we'll see each other there in a few years :).

#6 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 14 May 2005 - 06:49 AM

Well said, Nate. You have a talent for analyzing what's up and nailing down what matters. The only issue I'm having with this is that, in my opinion, you overestimate both the resolve and the strength of our bioconservative friends (or underestimate ours, for that matter).
And I would like to answer your final question by a decisive
"We have real people suffering and dying out there. To stop that is my foremost priority, and biotech gets us there fastest. When that's done, I will gladly join you to make all the cool stuff real."
I agree that this is a matter of personal preference.

#7 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 14 May 2005 - 07:27 AM

John Schloendorn Well said Nate. You have a talent to analyze what's up and nail down what matters.

I appreciate the kind words, John.

John Schloendorn The only issue I'm having with this is that in my opinion you overestimate both the resolve and the strength of our bioconservative friends (or underestimate ours, for that matter).

According to my assessments, they tend to have high stakes in other transhumanist-friendly technologies, such as all those which don’t violate souls in human embryos.

John Schloendorn And I would like to answer your final question by a decisive
"We have real people suffering and dying out there. To stop that is my foremost priority, and biotech gets us there fastest. When that's done, I will gladly join you to make all the cool stuff real."

I understand, but please note:

Nate Barna However, in the Human Condition problem class, if the few problems which can be solved with biotech are one’s only main concerns, especially those near-term time-sensitive problems, then a one-step problem-solver-of-the-human-condition has absolutely no business patronizing bioengineers or biotech enthusiasts…

Yet society, in general, seems to value financial security and economic freedom more than not dying or being in over-indulgent, hedonistic states. I agree that suffering and dying are bad, but if you view those problems as the most urgent because you believe most others should view them as the most urgent, and if most others don’t, then you’re incorrect in believing that suffering and dying are the most urgent problems to be dealt with.

#8 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 14 May 2005 - 08:37 AM

Not saying it's urgent by any social standards -- personal preference. I'm not here to do the bidding of the perceived social consensus. If I were, I'd better go make some action movies or win some football matches for the local club (darn, where is local?).
Anyway, on my personal urgency scale (which also factors in costs, i.e. timescales and the difference I can make -- the low-hanging fruit), involuntary suffering and death of sentient creatures have overriding priority over other aspects of the transhumanist vision that may be synergistically built on top of it. Honestly, I don't see much of an antithesis here at all.

#9 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 14 May 2005 - 08:45 AM

Aside, I do believe that once people can buy rejuvenation therapies over the counter (or at the local hospital, for that matter), nearly everyone is going to come around to share a very similar view. They will look back on us like we do on the guys who said "wash your hands before you help with childbirth" in the 1800s. But this should be, and is, unrelated to my motivation now.

#10

  • Lurker
  • 1

Posted 14 May 2005 - 08:53 AM

Yet society, in general, seems to value financial security and economic freedom more than not dying


Only because society does not believe extended lifespan to be an option as yet. The moment it becomes known that it is possible to extend lifespan - say to live for an additional 50 years - then it is very likely that most people will opt for the extended lifespan no matter the cost.

#11 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 14 May 2005 - 09:43 AM

Stole my idea! Maybe we'll see each other there in a few years :).

Seriously? That is awesome. I am going to start undergraduate studies this fall, so I am a long way off from being there, if I am accepted. Where are you, education-wise? When do you plan on working on your Ph.D.s?

#12 kraemahz

  • Guest
  • 157 posts
  • 0
  • Location:University of Washington

Posted 14 May 2005 - 11:06 AM

Sorry I didn't reply to your IM; I was playing a game [wis]. I'm only a year ahead of you. 'Tis closing in on the end of my first college year, and the time for me to apply for early admission to the BioEng major *crosses fingers.* I'm in it for the long haul :).

#13 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 14 May 2005 - 01:13 PM

John, if it’s your personal preference (which I think is fine), then the conditional doesn’t apply, and you’d be right that there’s no antithesis.

Prometheus, I agree completely. I just had assumed John was trying to establish what he thought was urgent based on actual social values of the present rather than extrapolated social values of the future. Therefore, actual social values of the present were stated.

#14 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 14 May 2005 - 02:48 PM

There is a lot to discuss in this thread, and I am very glad at the general tone and exchange I've read so far, but let me outline one of the great difficulties you face in a world of myopic self-interest lacking vision; to sum it up quaintly, "you are all generally a day late and a dollar short".

I am one of the elders of our little group, so I can, along with a few others, give some insight into the mind and attitudes of my fellow *Boomers.* Biotech is not something they trust, and nothing in the long-range strategy serves their specific self-interests, as the probable successes are for our children at best and offer far too few specifics that appeal to the most needed investors of my generation. And don't forget we grew up seeing everything from the Tuskegee experiments to watching The Boys from Brazil. They don't even have the *trust* for their physicians that our parents had. Hell, one of the most common complaints that I am beginning to hear is that they can't even communicate with their doctors, as so many are not only too young but *foreign* to boot.

Let me add to this dilemma that there has already been one delay in the systemic transfer of wealth between generations, the result of the WWII generation living far longer than expected and staying in power from the political arena to the corporate boardrooms. This resistance to change was also manifested in the direction of industrialization and development, in how new avenues were defined, and it impacted the kinds of disorganization that Peter refers to.

It helped destroy the Space Program from within through vested interests, blocked alternative energy development, and even corrupts the development of advanced strategic policy with respect to the global socioeconomic and political situations we face today. Dinosaurs are still in charge and definitely control the purse strings. There are many examples of this, but another dangerous competition is brewing between the Gen X'ers, those a little younger, and my fellow Boomers: a competition for power and *seniority* not just in the workplace but also in the boardrooms, banks, and government. Let's get this straight: Boomers outnumber all of you, they do vote, and they are currently inheriting the delayed wealth of the WWII generation, but they are not prepared to build visionary programs for biotech; they are uneducated as to the realistic potential and generally do not trust the options being offered to them. This is a serious memetic problem, not just the more practical problems that Peter initially outlines.

Many of you here represent the finest of the youngest generation now entering this struggle but I must warn you that you are not truly representative of the mainstream for your generation. It is very dangerous and perhaps misleading to make too many assumptions based on the company we keep here.

OK, back to the core discussion, biology vs. IA (Intelligence Augmentation). I agree that we have not yet winnowed out the very best strategy and tactics for achieving either, and frankly any serious student of these realizes quickly that they overlap, as do the methodologies, from genetics and nano to computer and/or material science, with med tech and needed skills.

There is a real and dangerous lack of serious global yet critical thinking, and all too little demonstrated ability by professionals to successfully integrate their various fields of expertise, even with the loud and numerous calls by many within academia to do so. There is even a very dangerous and atavistic resistance, reaching the highest ranks of society, toward the objectives we propose.

It would be inept to make ourselves vulnerable by underestimating those we do not respect, simply dismissing them as ignorant. Or are you not paying attention to the debacle playing out in Kansas over the teaching of evolution in the schools?

There is a core discussion here that needs to be the major focus for us, IMHO, both in terms of long- and short-term pragmatic goals. But there is also a related need to present our best proposals as packaged memes: for the common public, to educate and to help promote political support; and for lobbying government, business, and academia with specific (targeted) proposals that are profitable, shorter or more defined, allowing the constructive, systematic building of a broad, more strategic base of support that incorporates or institutionalizes itself across a greater spectrum of interests.

These are really very different yet overlapping discussions, and it is no small irony that the latter describes a *social* element of longevity, which I suggest we need to face early on, because what we are fighting is to a certain extent the result of earlier generational success toward achieving the very goals we define for our effort. The fact that people are living longer has already generated an even greater resistance to change than almost ever before, and if we are successful long term, this could be our own current generations (not necessarily ourselves) in the future resisting progressive change with an even greater force of inertia than can be brought to bear presently.

Our success will likely mean the creation of even more powerful institutionalized inertia, both technological and economic, with which to resist progress. We would do well to confront this defeatist meme early on with one that usurps the more popular manifest destiny model, but I frankly find the whole thing disturbing and can dispassionately suggest the logic of such a proposal while being personally discouraged by its apparent necessity. I guess I want to have a higher opinion of people but find such optimism the source of repeated disappointment.

Nonetheless, the ability to manipulate the masses is being turned against us, and if we do not meet this challenge early on and in the context of the conflict, we will lose the battle for hearts and minds. I must warn everyone that losing that battle would be a setback of potentially staggering proportion and could result in far more draconian tactics for all concerned.

Back to issues of intelligence versus biology. I am supportive of, interested in, and encouraged by the basic science I observe and the low-hanging-fruit argument that folks like Don and John propose, but look at this a different way for a moment. I am among the younger of the Boomer generation, and that fruit simply won't ripen in time to be picked by most of my generation. If you want more than their civic-minded spirit or concern for their children (more likely now grandchildren) to be their motivation for interest, then you had better repackage that fruit, get it to ripen quicker, and DEFINITELY get it to market quicker, or we had better find an alternative means of reaching out to them, because their attention span is still short and it is switching back to TV fast.

I suggest that there would be more interest in IA among my generation than many here might at first think possible: not as a form of simple cybernetics, but as time capsules for their personas. It blends with the memes they are familiar with and can be used as a methodology for garnering support for IA that allows the integration of BCI with advanced AI.

However, this will in all likelihood be a lateral development to strategic AI that is already being developed by the DoD for military purposes, which should not be assumed to be *Friendly*. So in a sense a race is on for AI development, but there is also going to be a lot more investment capital placed in that sector initially, as the results are more immediate and the goals more defined.

Also, there is already a noticeable resistance to some types of biotech advance, as they are not initially profitable to a pharmaceutical industry that sees things like vaccines as competitive with antibiotics and so has traditionally underfunded them. Frankly, the short-term view is that if we were, for example, to succeed at making many people more resistant to disease (and, worse, smarter), this would drastically cut into their immediate profits by diminishing the market sector that they *exploit* by increasingly competitive means.

The irony is that this is VERY short-term thinking, because obviously many of us already understand that longer-lived people would have a greater need for their products and would provide a larger and more reliable long-term market, as the need for this sector's products would increase over time, not decrease; especially as more people around the world achieved the minimum levels of wealth and education necessary to appreciate and avail themselves of the advantages of the biotech industry.

However, it is the Boomers who currently control the pharmaceutical/biotech industries, and we need to get back to the reality I gave earlier: even they do not think that biotech advances are going to benefit them specifically, in the amount of time they have left, in a sufficiently demonstrable manner. We are not going to reverse senescence in time to sell that strategy to most of my generation without a breakthrough that either hasn't yet happened or hasn't yet been realized (recognized for development). For example, even the best-case scenario on stem cells and genetics, assuming we were able to sweep away all irrational social resistance and develop a modern biotech version of the Manhattan Project, is still measured in decades. Decades that represent a level of inevitable aging for my generation, suggesting it is futile to rely on the prospects from that avenue.

BTW, please don't shoot the messenger in this instance, as I am only telling it like it is, not telling you my opinion of how it should be. I don't happen to be in personal agreement with my peers, but I am not ignorant of how they think. We need AARP and other groups of this magnitude to be in line with our objectives, to lobby and coordinate R&D, and then, as a combined force, to advance in a theater-strategic manner.

Additionally, it is my observation that we need targeted *products* that can accelerate progress and be attractive (garner and hold positive attention) for my generation. In my opinion, IA (Intelligence Augmentation) is such a product, largely because we don't need everyone to agree with this strategy; we only need a sufficient initial level of success from which to compound and build with successive ones. Success is its own reward in this respect.

If we can accelerate developmental intelligence and manage the potential ego and social conundrums this strategy risks, while also encouraging these initial IA adepts to help foster a broader and more comprehensive integration of the necessary technologies, then I think we might be able to bootstrap early successes into an unstoppable momentum that can capture public support. My generation has already witnessed a remarkable level of computer tech advancement and is confident that shorter-term results are possible from that field, even if most of them do not understand what they are seeing. They have already developed this *faith*.

Most of the younger generations, IMHO, shouldn't allow this kind of technology to be experimented on them, as they have a considerably higher probability of healthy lives in front of them; but my generation is facing catastrophic death and mental deterioration, and the risk/reward (cost/benefit) relationship for this technological avenue is very different. If, for example, even a passive form of cerebral scan could advance uploading for a person facing Alzheimer's, then I suspect they would jump at it even if the life expectancy of their body were shortened as a result.

In fact, rather than facing a slow and excruciatingly long period of continuous mental decay that adversely impacts family, friends, and the self, I bet a consensus would rapidly be reached that people would prefer to live a shorter but vastly more rewarding period, as a race against their own demise, by contributing something important as an IA: with the potentially richer personal experience, and with the assurance of their memory being held intact for that period and converted to a form reflecting the potential of uploading, to be preserved indefinitely.

This approach to IA is analogous in some respects to the risks and rewards of other forms of life-sustaining biomechanical developments, like the artificial heart or even dialysis machines, and if a significant, successful technological approach or methodology for it can be demonstrated early on, then a lot of investment will follow toward ends that we can then perhaps be ready to help define and integrate with biotech's low-hanging fruit.

#15 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 14 May 2005 - 04:36 PM

I'm glad that you opened this thread, Peter. So far there's been a lot of very interesting dialogue.

Nate,

One of your general observations/contentions is that the bioconservative camp has specifically opposed biological avenues towards enhancement. You further reason that this is the result of various "social inhibitions".

And I agree with this assessment, but I do not think you are adequately addressing where these social inhibitions come from.

These articles by Blackford give a general outline of the position I have taken on the *source* of the biocon opposition.

The Supposed Sin of Defying Nature I

The Supposed Sin of Defying Nature II

The reason that biocons do not oppose the pursuit of AI or IA (they actually do oppose IA to some extent, but I guess a discussion on neuropharmacology would be going too far astray) is not that they are indifferent to or even supportive of this area of research, but that their assessment of future progress has left them with the impression that such efforts are mostly futile and that the chances of success in these areas are negligible. If society began to witness the widespread use of neural implants, I guarantee you that biocon opposition would manifest itself.

Again, this comes down to what Blackford considers their perception of the natural order, which can be thought of as a social construct stemming from our evolutionary psychology and designed to maintain a perceived level of "safety". As such, biocon opposition stems from anything which could be viewed as causing widespread and abrupt social upheaval or as altering the baseline assumptions upon which society operates. Obviously, topics such as reproduction and *the cycle of life* are hot-button issues for the biocons, but it would be a mistake to think that they will limit their opposition to these areas.

#16 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 14 May 2005 - 04:50 PM

From my assessment of neuroscience we are at a point where we don't need a great deal more information to begin robustly modeling cortical information processing. I see understanding this type of processing as one of the key problems that will open gates in multiple fields (increased human intelligence, AI, and hence all other fields of human endeavor - especially those fields that require understanding of highly complex systems).


Such an assessment from someone with your qualifications, Peter, makes me giddy.

There seem to be a few positions that you have taken here (implicitly) that I am wondering if you could elaborate on.

1) Do you believe that in the medium term (<30 years) real AI can be designed which will be able to model complex biological systems more effectively than a human mind can at present?

2) Do you really believe that IA will be feasible and utilized by systems engineers in the medium term (again <30 years)?
-----------------------------------------------------
Also I am curious...

Do you believe that it will be necessary to achieve ENS (regardless of whether this is accomplished by the utilization of traditional human intelligence or AI/IA) before a transition to an alternative substrate can be engineered?

or

Do you believe that a transfer over to an alternative substrate can be achieved prior to the accomplishment of ENS?

[Note: The last proposition I am highly skeptical of]

#17 signifier

  • Guest
  • 79 posts
  • 0

Posted 14 May 2005 - 05:20 PM

Artificial intelligence. However, I won't be working on anything in that field... I despise sitting down and programming. But I will be doing everything I can to support the development of artificial intelligence and movement toward the singularity.

#18 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,068 posts
  • 2,000
  • Location:Wausau, WI

Posted 14 May 2005 - 05:25 PM

It is hard to predict which mode of investigation will reap the most results because they are so interrelated and will become more so in the near future. My feeling is that biotech will have the most impact in the next 1 to 5 years, as society (other than extreme bioconservatives and green Luddites) seems very willing to accept treatments, pills, and surgery to make them better. After five years, AI/IA is a huge wild card, with the most potential to disrupt... everything... hopefully for the better.

#19 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 14 May 2005 - 05:34 PM

If for example even a passive form of cerebral scan could advance uploading to a person facing Alzheimer's then I suspect they would jump at it even if the life expectancy of their body were shortened as a result.


A cerebral scan would likely escape the biocons' wrath by its lack of significance. Such technology does not extend anybody's life; it just creates a record. The person still dies. Perhaps acceptance of this technology could be a wedge to open up people's minds to other technologies; is that what you meant?

I think you struck a critical point here:

Additionally it is also my observation that we need targeted *products* that can accelerate progress and be attractive (garner and hold positive attention) for my generation


Some aspects of SENS are less controversial (such as junk removal, which doesn't necessarily require stem cells) and could be made into products. Success here could pave the way for greater acceptance of the rest of SENS. I think the biocon perspective will crumble in the face of the potential for huge profits... the most important objective for life extension then is to demonstrate that potential.

#20 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 14 May 2005 - 06:03 PM

DonSpanton One of your general observations/contentions is that the bioconservative camp has specifically opposed biological avenues towards enhancement. You further reason that this is the result of various "social inhibitions".... And I agree with this assessment, but I do not think you are adequately addressing where these social inhibitions come from.

Hi, Don. I presupposed what I perceived as a commonsense assumption: social inhibitions are a result of fear. By relating the idea of “inherited situations” to sentient agents, I gave reasons for not digging further into the social inhibition factors. At this point, I’m uncertain which of my assumptions you think I should update.

#21 antilithium

  • Guest
  • 77 posts
  • 1
  • Location:Tucson, Arizona

Posted 14 May 2005 - 06:26 PM

I believe computation is the highest priority of *overall* technological development, because without it, advanced biotech and nanotech would be infeasible. I also see that society is becoming more and more entrenched in computational systems and networks. Look at Max Wi-Fi; heck, newer cars have "programmable" intake ratios and transmissions. Everywhere I go, some device is embedded with microchips. And most people never realize this...

What I'm saying is: computation leads to better efficiency in system management and communication. I wouldn't call it "AI" per se. In fact, I still believe we're a long way from developing a self-automating entity with abstracting capabilities. Having more reliable software is more important in the short term.

If you ask me, the closest thing to AI within the next five to ten years will probably be expert systems. And I say this because *most* AI researchers are working in the interest of larger organizations. Could you use a fully sentient & sapient entity as a tool? I doubt it. And if one did, many would call it slavery. I don't see the benefit of creating true AI. Most talk about "friendly" AI and how life is just going to be butterflies and daisies. If *real* AI is developed, is it going to cuddle us for all existence? I see the implication that AI could lead to bigger and better things. However, I ponder the motivations of such an entity. Are we its highest priority?

Summing everything up: computation will most likely be the biggest factor in technological progress. Organizations will use AI research to create better and more stable software, not AIs, which will facilitate the development and manipulation of nanotech and biotech.

P.S.

I'd like to say that this post is biased... on the grounds that I'm a techie majoring in computer science. ^_^

#22 kraemahz

  • Guest
  • 157 posts
  • 0
  • Location:University of Washington

Posted 14 May 2005 - 07:18 PM

antilithium
I believe computation is the highest priority of *overall* technological development, because without it, advanced biotech and nanotech would be infeasible.

But where would CS be without the electrical engineers who design the transistors it runs on? Where would the physical applications be without the mechanical engineers? How would you even know how to write the code for specific applications without a physicist or biologist breathing down your neck? CS is just one piece of a much larger technological puzzle whose pieces are interdependent. Most other engineering fields require a basic introduction to programming, and CS requires an introduction to the formal sciences, precisely because of that interrelationship.

#23 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 14 May 2005 - 08:04 PM

A cerebral scan would likely escape the biocons' wrath by its lack of significance. Such technology does not extend anybody's life, it just creates a record. The person still dies. Perhaps acceptance of this technology could be a wedge to open up people's minds to other technologies, is that what you meant?


Yes it is, Osiris, but I also include something else, based on technologies like the hippocampal implant, which might defer the worst effects of degenerative disease. The patient still dies catastrophically, likely within a shorter period, but it is a period in which the patient receives cutting-edge intelligence enhancements. These would allow this experimental work to begin robust development, and the patient might in fact enjoy the benefits of an enhanced intellect during that remaining, possibly more rewarding, interval.

I also suspect that these types of cerebral augmentation would improve the possibility of uploading at least a person's memories and experience, producing a kind of life-experience data bank. Such a record could not only be kept as a significant part of a growing historical archive, but could also serve as a repository for future bio-cybernetic advances that might allow stored personalities to be rejoined with a synthetic body and returned to life.

I am afraid that we won't be able to bring on significant biotech as soon as Mind suggests, if only for regulatory reasons. Products that will be available in one to five years must already be in the pipeline to actually reach the market in that time frame. In other words, we already have a very good idea, now, of what will be available then.

I think the biocon perspective will crumble in the face of the potential for huge profits... the most important objective for life extension then is to demonstrate that potential. 


You are a quick study, Osiris; that is exactly the idea I am promoting. It is also why I don't think this can remain a toy of the idle rich once discovered: the potential profit dwarfs even that of today's entire pharmaceutical industry. It is such a monumental amount of wealth that not only is there more than enough to spread around, there is too much for it to remain bottled up.


The real overlap between these areas that is being overlooked is this: computer languages will have to advance to cope with unraveling genetics. As genetics and computer languages intertwine and contribute to new methods of computing, and to software models adequate to the potential of quantum computing, a parallel synergy will develop between nanotech materials for quantum computing and molecular-language programming for advancing nanotech.

Really, there is a convergence of technologies here that is far too important and powerful to ignore. The basic issue I am raising is how to make the correct pitch now, so as to reap the greatest benefit overall and bootstrap all these areas of development together.

SENS needs to create breakthrough (introductory) products that capitalize a market, creating an investment incentive. As money follows the ideas, more people and interests will follow the money. If people see practical applications of a principle, they will credit the larger ideas with far more potential. We are already seeing this process at work in nanotech from a market-development perspective. We are also seeing it in cybernetics, with prosthetic eyes, limbs, and other products reaching field-testing status.

What I am suggesting is that we keep sight of the larger picture that almost everyone else tends to lose in favor of their individual focus. What we do really well here is weave these ideas together, integrating, or *synthesizing*, new and potentially unforeseen alternatives.

#24 signifier

  • Guest
  • 79 posts
  • 0

Posted 14 May 2005 - 09:42 PM

If *real* AI is developed, is it going to cuddle us for all existence? I see the implications that AI could lead to bigger and better things. However, I ponder the motivations of such an entity. Are we its highest priority?


"You have no care for your species. For thousands of years men dreamed of pacts with demons. Only now are such things possible."

William Gibson, Neuromancer

I wonder why AI gets such a bad rap. We've never even met one yet, and we assume that they all will be flawed with some irresolvable, philosophical problem. Either they will be evil, or useless, or apathetic to the human condition, or incapable of understanding some abstract part of human existence, of what it means to be human...

The problem is that we're consumed with ancient views of artificial intelligence. More than biotechnology, more than nanotech, the idea of AI fills us with an overwhelming sense of the unknown... And connected to that are images from popular culture: Skynet starting wars with humans in Terminator, programs hunting down humans in The Matrix, HAL refusing to open the pod bay doors.

Now, if *real* AI is developed, is it going to cuddle us for all existence? If we program it to. Let's not confuse consciousness and the human experience with intelligence and the ability to solve problems. But that is too low a level of future shock. Why would it need to cuddle us? Why would it be smart and powerful while human intelligence and ability remain stagnant?

#25 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 14 May 2005 - 11:15 PM

Thanks gang, lots of great discussion so far. Taking me a while to go through it all.

Let me start with comments by justinb and kraemahz - just a bit of advice, actually. From my many years of experience in and around medical schools and biomedical research programs, I would not recommend that anyone go to medical school if they want to make progress on the scientific front. Certainly, if you are interested in directly providing care to patients, go to medical school; but if you are interested in doing research, go to graduate school. I say this because the majority of medical school programs do not teach people to think critically and deeply about the causes of disease. Also, find a graduate school that encourages critical thinking: directed problem solving, not rote class work.

Also, nano is about ten years or more from being able to make any impact on biomedical engineering at all; there is still a great deal of basic research to be done before you will see clinical applications. So choose wisely. If you were to go into bioengineering now, there would be almost nothing you could use nano for. If you went into nano, you would be doing a great deal of basic physics research.

Timing is everything ;)

Nano is certainly having an impact on increased computational power, but to get very far with highly complex circuits you will probably need better intelligent systems to design them. On many fronts we are running up against classes of problems which may exceed the ability of human minds alone to solve. This is OK, though, as we are already building systems which aid our ability to solve problems in these classes.

#26 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 14 May 2005 - 11:55 PM

I find it encouraging that someone with an actual background in cognitive science thinks there's any chance in AI. I've never had the imagination to see such a chance. In my eyes there seem to be too many orders of magnitude in complexity missing. But I don't know what I'm talking about. Can you briefly outline your basic strategy for me?


My background is in neuroscience, not cognitive science [lol], and that is why I think there is a chance. We need to put the nail in the coffin of GOFAI (Good Old-Fashioned Artificial Intelligence), which used strategies so far from reality that it was doomed from the start. Cogsci and GOFAI tend not to be very well grounded in "how things really work" in the brain. The whole paradigm was very behavioral in outlook and never really dared to "lift the hood" on intelligence.

The problem in neuroscience is that there has been a split between high-level systems and scanning neuroscientists on one side, and low-level single-cell and molecular neuroscientists on the other. The really interesting stuff that will allow us to reach a general understanding of human intelligence is in the middle range: in network operations. These have only been studied for the very short term, but they are yielding information quickly.

I owe a debt of gratitude to Jeff Hawkins' new book "On Intelligence", which sent me back to my notes from a few years ago. The key idea people have been ignoring is that even though the cortex processes many different types of information, it likely uses the same algorithm to process all of them, and the same network structures in every part of the cortex. Furthermore, it is incredibly plastic: it can accept any type of information and learn to process it. The key person who developed these ideas is Vernon Mountcastle, whose work greatly interested me a long time ago. I just re-read some of his key papers, and he was remarkably prescient.

The point being that you may only have to understand how this cortical algorithm works to begin to build simulated systems that can match the processing ability of mammals - you don't need to understand the cortex as a whole if you can build it up modularly.
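The "one repeated algorithm" idea can be caricatured in a few lines of code. The sketch below is my own toy illustration, not Mountcastle's or Hawkins' actual model: identical modules, each learning simple transition statistics, handle two different "modalities" with the same code, which is the modularity point in miniature.

```python
from collections import defaultdict

class CorticalModule:
    """Toy stand-in for a repeated cortical circuit: it learns the
    transition statistics of whatever sequence it is fed and predicts
    the most likely next symbol. The point is that the SAME class
    handles any modality -- only the input differs."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        # Count how often `symbol` follows the previous symbol.
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self, symbol):
        # Return the most frequently observed successor, or None.
        nxt = self.counts.get(symbol)
        if not nxt:
            return None
        return max(nxt, key=nxt.get)

# Identical modules, different "modalities" -- same algorithm throughout.
text_module, tone_module = CorticalModule(), CorticalModule()
for ch in "abcabcabc":          # a "text" stream
    text_module.observe(ch)
for hz in [440, 550, 660, 440, 550, 660]:  # a "tone" stream
    tone_module.observe(hz)

print(text_module.predict("a"))  # -> b
print(tone_module.predict(440))  # -> 550
```

Obviously a real cortical column does vastly more than count bigrams; the sketch only shows how a single repeated learning unit can be reused across input types.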

From what I've learned about cell replacement, I get the pretty clear impression that the goal-targeted perfection of this technology can get us near-indefinite life extension, if only substantial numbers of qualified people started wanting it. (With some aggregate removal -- maybe.) No human and no AI need understand the complexity of the cell or the mechanisms of how we age in substantially greater detail than we do today in order to do it. All I can see it taking is extensive R&D into methods to get rid of wild-type cells, make replacements from the germ line, and do limited amounts of tissue engineering. This is not the creation of anything substantially new, but merely the adaptation of existing technology for each and every tissue of the body.


I agree with this up to a point. For much of the body this is likely to be true: if you understand the correct environmental/genetic cues to influence cells to become what you want them to be, you could replace most human tissues. I also think it is likely to be fairly straightforward. Where I do not think this will be true, though, is in the brain. While the cortex is extremely plastic, much of the rest of the brain is highly structurally invariant, and it gets that way only through a highly complex developmental process that will be extremely hard to recapitulate in an adult.

#27 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 15 May 2005 - 12:10 AM

Hi Nate

I acknowledge there are other areas of biotechnology whose obtained solutions are highly valued and not relevantly contended. Therefore, my blanket judgment “biotechnology is a waste” was a mistake.


I know where you are coming from, Nate. After spending a lot of time in and around biotech, I also came to the conclusion that a great deal of what goes on in biotech is absolutely worthless (especially in commercial biotech), because there is very little critical thinking about what the research targets should be. My hope is that the bioengineering culture will change this somewhat.

That aside, I still think that biotechnology doesn’t have the potential to solve as many problems falling under the class Human Condition as nanotechnology and AGI (both of which don’t really need biotech beyond neuroscience).


To go even more general: systems theory and complex systems studies, which I think are at the heart of AGI, could move all fields forward at a greatly accelerated rate. Understanding how information flows and how structures form in systems at all scales should be job #1. When we run into problems we can't solve by modeling or abstraction, we can extract data from real-world systems and use them as teaching examples.

#28 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,068 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 May 2005 - 12:23 AM

Laz, I should be more clear. I am saying biotech has the best chance of delivering some improvements in life extension in the next 1 to 5 years. I am not saying that major breakthroughs will be achieved in this time frame. Beyond five years, I think AI will become ever more important.

#29 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 15 May 2005 - 12:23 AM

Hi Laz,

A short response to your long post. In general, I think we need a crash program to model information flows in societies and cultures. Very little of this work has actually been done, but it would be ridiculously profitable if it were undertaken. Of course, I would want to use it for my own purposes: to figure out how to nudge society in directions which I believe are more ethical, generate more human fulfillment, and increase the spread of consciousness. (The dark side of this - you can imagine how much governments and marketing agencies would be willing to pay for such information [huh]) I'm hoping that some of the modeling work I will do on complex neural systems can eventually be applied to problems such as these as well.

#30 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 15 May 2005 - 12:41 AM

Whew! You asked some doozies here Don [lol]

1)  Do you believe that in the medium term (<30 years) real AI can be designed which will be able to model complex biological systems more effectively than a human mind can at present?


Yes. Initially, I think we will use systems that can evolutionarily explore complex biological systems and that are well constrained by the parameter space of the biology. These need not be fully realized AI systems - there has been significant progress in the artificial life field in the last few years. Fudging a little, you could already say that in some respects these systems model the biology better than a human mind can. For 30 years out, I am extremely optimistic.
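As a rough sketch of what "evolutionary exploration constrained by the parameter space of the biology" can mean in practice, here is a minimal genetic algorithm. Everything specific in it is invented for illustration: the bounds, the fitness function, and the target values stand in for whatever a real model would score (e.g. how well simulated neural dynamics match recorded data).

```python
import random

# Hypothetical biological bounds on three rate constants; the mutation
# step is clamped so the search never leaves this space.
BOUNDS = [(0.1, 10.0), (0.0, 1.0), (1.0, 100.0)]

def fitness(params):
    # Invented objective: prefer parameters near a "known-good" regime.
    target = [5.0, 0.5, 50.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind):
    # Gaussian jitter proportional to each parameter's range,
    # clamped back inside the biological bounds.
    return [min(hi, max(lo, g + random.gauss(0, (hi - lo) * 0.05)))
            for g, (lo, hi) in zip(ind, BOUNDS)]

def evolve(generations=60, pop_size=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best first
        survivors = pop[: pop_size // 2]         # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best)  # should land near the invented target [5.0, 0.5, 50.0]
```

The biology enters twice: as hard bounds on the search space and as the fitness measure; the algorithm itself stays trivially simple, which is the appeal of this class of methods.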

2)  Do you really believe that IA will be feasible and utilized by systems engineers in the medium term (again <30 years)?


IA is feasible in the extreme short term, especially for systems engineers. There are so many technologies we just aren't using right now. A simple example: highly complex neuronal information dynamics would be much easier for a human to process if they were placed in a three-dimensional framework in which you could watch the information flows - this is possible using something along the lines of a state-of-the-art video game engine. In general this requires a long reply, though, so maybe it should be its own topic.


Do you believe that it will be necessary to achieve ENS (regardless of whether this is accomplished by the utilization of traditional human intelligence or AI/IA) before a transition to an alternative substrate can be engineered?

or

Do you believe that a transfer over to an alternative substrate can be achieved prior to the accomplishment of ENS?

[Note:  The last proposition I am highly skeptical of]


I believe that you are not going to be able to tell the difference between alternative substrates and current biology as soon as nano comes on strong. Our biology and technology are already on a collision course and have been for some time. I hate to guess at dates, but somewhere around 20 years from now the line between artificial and original biological will begin to get quite fuzzy. In sum - these things are going to happen together - the best of the biology will become part of the artificial substrate.



