
/sci/ - Science & Math



File: 36 KB, 331x500, DERP.jpg
1842744 No.1842744 [Reply] [Original]

Requesting opinions on this book and the man who wrote it from /sci/. Just curious to see what you guys think of his theory. I'm convinced he's right and I got a 136 IQ bitch! What do you have?

>> No.1842757

in before shitstorm.

I don't see the singularity happening, but I like the other concepts laid out in the book. You don't have to accept and anticipate everything that's written in it.

I just . . . keep it in the back of my mind, working towards it.

>> No.1842780

From the lack of responses, I'm just going to assume /sci/ is too retarded to have even heard of this book. I am disappointed in you, /sci/.

>> No.1842782

This is a troll thread.

>> No.1842799

>>1842780
>ask someone for their opinion
>call them stupid
>expect answer

>> No.1842801

>I'm convinced he's right and I got a 136 IQ bitch

dey see me trollin'

>> No.1842813

>OP is, contrary to what he's implying, fucking stupid.

>> No.1842818

There are several possibilities

1. The Singularity will happen and geek rapture will occur and we'll be tweaking the physical constants of the universe within a few centuries.

2. It will not happen any time soon because of overconfidence in the ability to create AI. The AI field has been plagued by overconfidence in the past, so it is not unreasonable to be very skeptical of ambitious AI claims that happen decades from now.

3. The singularity will happen but it won't be the ungodly paradigm changing event it is played out to be. There could, for example, be a fundamental limit to scientific progress, at least in the sense of always creating new wondrous technologies. And that limit could potentially happen not long after AI is achieved so the AI can't add much for us.

>> No.1842819

Come on guys, I was just messing around with the last line. Do you guys all lack a sense of humor? Seriously, I can't tell if you're trolling or not with the "he called me stupid" response. Just kidding, you're trolling. Aren't you?

>> No.1842828

Fine I'll respond.

Ray Kurzweil is a fucking genius, he's proven this many times. It's true that people don't correctly estimate the effect of exponential growth in technology, and unfortunately most people don't care. However, the impact it will have in the next 10-30 years will be unprecedented, and frankly, almost no one is qualified or even capable of predicting the implications of technology in that time period. We can, however, estimate the rate at which technology is increasing and give a very rough speculation about the applications of these technologies based on the last 60 years or so. Many presume Kurzweil and other transhumanists to be overly optimistic about technology, and rightly so, since the future has seemed to be 15 years away for the last 60 years. But in my opinion, the evidence they present and the ways they have formed these logical conclusions and predictions of the future rest on pretty solid ground. Many of Kurzweil's predictions have become reality despite much criticism.

tl;dr Listen to Kurzweil because he's probably smarter than you

>> No.1842829

the singularity will occur because of the large hardon collider and we will all get sucked into a black hole and then ai wont be doin shit cos we be dead

>> No.1842839

>>1842818
I disagree with your points.

1. The Singularity will happen and geek rapture will occur and we'll be tweaking the physical constants of the universe within a few centuries.

It will not be within a few centuries. Rather, it will be within a few decades. Just look at the growth of computing today, and realise that the tools of the near-future will be one million times more powerful than they are today.
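A minimal sketch of where a "one million times" figure like this comes from, assuming (hypothetically) that computing capacity doubles every 18 months, a commonly cited Moore's-law pace:

```python
# Doublings accumulated over 30 years at one doubling per 18 months (assumed rate).
years = 30
doubling_period = 1.5                # years per doubling -- an assumption, not a law
doublings = years / doubling_period  # 20 doublings over the span
growth = 2 ** doublings              # total capacity multiplier

print(doublings)  # 20.0
print(growth)     # 1048576.0 -- about one million times
```

Whether the doubling rate actually holds for decades is exactly what the rest of this thread is arguing about; the arithmetic itself is the uncontroversial part.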

2. It will not happen any time soon because of overconfidence in the ability to create AI. The AI field has been plagued by overconfidence in the past, so it is not unreasonable to be very skeptical of ambitious AI claims that happen decades from now.

So just because something was true in the past means it will be true in the future? This is flawed logic. Like I said before about the growth of computing: when we have the required computing power (which we will have in roughly 2029) to simulate the human brain neuron for neuron, I'm pretty sure we will have a good understanding of intelligence.

3. The singularity will happen but it won't be the ungodly paradigm changing event it is played out to be. There could, for example, be a fundamental limit to scientific progress, at least in the sense of always creating new wondrous technologies. And that limit could potentially happen not long after AI is achieved so the AI can't add much for us.

Possibly, but who is to say? Right now our technology is billions of times weaker than it will be in the future, and therefore, we have no idea what we are talking about. It is impossible to predict the new theories and ways around the fundamental limits of the universe that we will be discovering when we are a billion times smarter.

>> No.1842850

>>1842828
Indeed, my thoughts exactly. I applaud you for your intelligent contribution.

>> No.1842865

Bump

>> No.1842869

OP is an aspie

>> No.1842879

>>1842869
You are so cool. Not

>> No.1842880

He's a troll who gets money for "predicting" stuff. His Jewish heritage also gets him lots of support, and he got something like 20 honorary doctorates from other Jews who are directors at universities. Finally, what he's saying is nothing controversial: "in the future technology will be more advanced than today" is basically his whole argument, and it's pretty obvious that yes, that's what's going to happen. His other stuff is just crazy stories he invented and used to write a book to make money. I don't think a man who takes 250 different pills a day to "live longer" is a genius; far from it, I think he's nuts. My personal view of "the singularity" is that it's not going to happen. Singularities are by definition forbidden in the universe, and when something happens it means the rules have changed; there is no infinity or division by infinity.

>> No.1842892

>>1842879
This. Not.

>> No.1842897

>>1842880
Hooray! You just won the weekly retard prize for having an extra chromosome!

You stupid bitch. People still think the future he describes won't come for centuries, and he is proving them all wrong.

>> No.1842912

>>1842839

>So just because something was true in the past, means it will be true in the future. This is flawed logic.

this is exactly the logic you employ when you extrapolate the rate of technological advancement to make those silly exponential growth graphs that i see on here all the time.

the truth is that nobody has any idea how technology will progress even over the next couple of decades, much less over the next century.

re: kurzweil (and i don't wanna go ad hominem here but i feel it deserves mentioning): the man is clearly insane. even if he is a gifted computer scientist or whatever (i actually don't know his background), he clearly knows nothing about biology whatsoever. regardless of his beliefs about computing power, the man makes a fortune peddling books full of biological misinformation, and that is morally reprehensible and antithetical to science. in addition, i find his claim that we will "completely simulate the human brain in 20 years or so" to be laughable.

just my two cents. frankly i'm tired of this thread every day on /sci/, if you cannot tell.

>> No.1842931

>>1842912
see
>>1842828

>> No.1842945

The singularity is nigh, friends. Nanochip technology + increasing AI intelligence will lead us to the future... unless the public fucks it up and religious groups are distraught about it... then it's never gonna happen, because people are afraid of change.

>> No.1842978

>>1842912
Your cynicism pains me. He only went to MIT, runs companies worth hundreds of millions of dollars, and won the highest award for science and technology, and yet you claim he is "clearly insane" and you find his claims to be "laughable". Honestly, what are your credentials? I won't go full-on ad hominem because you didn't, but come on. Don't you think his claims merit an ounce of consideration? Or are credentials just not an important factor for you when you form your own personal beliefs? Also, the difference between your logic and his is that he has proved his extrapolations and predictions to be correct for literally decades. Just look at his track record; it's above and beyond what any critic of his can claim to have.

>> No.1842997

>>1842931

that's the thing though, i don't necessarily agree. he stands up on stage and says "in 20 years we will have reverse engineered the human brain" as if that is something that is obvious. i for one do not accept that looking at a graph and extrapolating from it constitutes "sound" logic. frankly i don't see any evidence in any relevant fields that this will be accomplished in anything approaching that timeframe.

and i do not see how this addressed my point that kurzweil is currently on several online lists of medical quacks and scam products. the life extension products and books that he sells are nothing short of fraud, and i detest him for that.

do not dismiss this. OP said "talk about kurzweil, what do you think about him?" i'm not going to stand back and say "well of course he is a genius" when he makes a small fortune doing shit like this.

and by the way, for the record, his most sensationalist predictions have actually not come true.

>> No.1843006

>>1842839

> When we have the required computing (which we will have in roughly 2029) to simulate the human brain neuron for neuron, I'm pretty sure we will have a good understanding of intelligence.

I don't want to get into a debate over whether we can or will develop a sufficient understanding of human intelligence to produce machines that exhibit human-like intelligence, but I will point out that simply simulating a human mind in a machine does not imply that we understand anything about intelligence, or that the simulated mind will understand its own intelligence well enough to modify itself to become smarter. Knowing the placement of and connections between every neuron in a human brain doesn't mean we understand the underlying logic behind how intelligence emerges from such a configuration. So just being able to simulate a human brain in a computer does not really put us any closer to the singularity.

> Right now our technology is billions of times weaker than it will be in the future, and therefore, we have no idea what we are talking about.

I'm not sure I follow. I suggested the possibility that there may be a fundamental limit to human technological accomplishment, and that the limit could arrive sooner rather than later (due to the very same past exponential technological growth that Singularitarians so often refer to). You then say that technology of the future will be billions of times more advanced than now (thus implicitly rejecting the suggestion) and then use that to somehow falsify what I suggested?

>> No.1843013

>>1842839

> ways around the fundamental limits

Here's the thing, bro: Nature does not care whether we want a "way around". All that matters is how Nature actually behaves and the universe was not made to give humans infinite possibilities for the future.

It is entirely logical to state that FTL travel will never be possible. Maybe it will be, but virtually all evidence now says otherwise. To suggest that there just *has* to be a way around it is human-centric. Similarly, there are problems with picoscale (and some nanoscale) engineering that arise from quantum mechanics. Those problems may or may not ever be resolved. All we can say at this point is that there are clear barriers. Anything else is an act of faith.

Such "acts of faith" are very common in singularitarianism. There is a certain apocalyptic strain to it that would be recognizable to many varieties of Marxists and Christians. In general, it is unwise to view history as a grand path leading to some "inevitable" moment in the future.

>> No.1843023

>>1842978

i never said that he hasn't or will not make valuable contributions to his field. his book's central claim is not just "technology will grow and it will be cool," he literally crafts a whole storyline involving neo-luddites and giant space robots flying around zapping up planets. even if the extrapolations are reasonable (which is debatable), this is not. i don't care how many degrees he has, that is not science.

>> No.1843059

I've read the book and find it to be a bit of a mindjob.

It's incredibly thorough, for a start. Whereas anybody else would just freely extrapolate from ideas, almost every single fucking sentence ends in a citation, with a 100-page section of references at the back - that's at least 1/5th of the entire book's length.

It's one of the best books I've read, but it is pretty exhausting trying to take in all that information. It's not a book for entertainment. It's a roadmap of the future, and it makes no concessions to readability or digestibility in communicating its arguments.

Given Kurzweil's track record for predicting the events of the last 15 years (something like over 100 predictions and only 3-4 were wrong), this book is certainly very exciting as it is based around a much more up-to-date and intelligent analysis of current information trends.

I thoroughly recommend it.

>> No.1843074

OP, I really enjoyed the book.

It seems a trivial and obvious thing, but I was really impressed with how he explained the exponential rate at which human progress has been increasing for the past 50 years.

>> No.1843086

My predictions:

- It will take longer (maybe over a century) to create posthuman AI (Kurzweil's books go out of print, but Roger Penrose is still there laughing at us)
- Increasing the intelligence of humans will be easier
- Mind uploading will be much harder than anticipated, and early versions will destroy the person, but once it's achievable, human augmentation on unprecedented scales will become possible
- There will be a Singularity this century: not of AI changing everything, but simply accelerating progress put into overdrive by the Nanotech Revolution that will also take place between 2030 and 2050. Or earlier.

I buy it.

>> No.1843085

>http://www.rayandterry.com/index.asp

>Ray Kurzweil
>Genius
>Operates website selling nutritional supplements with Dr. Terry Grossman
>Dr. Terry Grossman is a homeopath
>Ray associates himself professionally with quacks

WHATAFUCKINGSURPRISE.exe

>> No.1843084

My belief: humans have always been looking to extend their reach... yes, that's almost exactly what Ray K. has said, but I truly think that drive will propel us much farther than we can ever imagine. Look at it this way: many years ago we were living in caves and hunting animals with crude tools... put an ancient man into today's society and see what he thinks about our world. We're looking at the future right now from our own current understanding... a linear perspective. The funny thing about exponential growth is that initially it doesn't seem like much progress, but after 20-25 turns of the screw you really begin to notice major changes, and BOOM!! It's like a snowball effect... continually building upon itself.
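The "turns of the screw" point is easy to make concrete; a small sketch comparing linear and exponential growth over 25 steps (the numbers are purely illustrative):

```python
# Linear growth adds a fixed amount each step; exponential growth multiplies.
steps = 25
linear = [2 * n for n in range(steps + 1)]        # +2 per step
exponential = [2 ** n for n in range(steps + 1)]  # doubles each step

print(linear[steps])       # 50
print(exponential[steps])  # 33554432 -- the "snowball" after 25 doublings
```

For the first few steps the two sequences look similar (2, 4 vs 2, 4), which is why exponential progress can feel linear early on.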

>> No.1843089

>>1843006
>I'm not sure I follow. I suggested the possibility that there may be a fundamental limit to human technological accomplishment, and that the limit could arrive sooner rather than later (due to the very same past exponential technological growth that Singularitarians so often refer to). You then say that technology of the future will be billions of times more advanced than now (thus implicitly rejecting the suggestion) and then use that to somehow falsify what I suggested?

What kind of limits are you even implying to be "sooner" rather than "later"? Are we talking within a few years from now, or what? The growth of computing for the next decade has already been mapped out by Intel, and they claim that any limits imposed on computing progress by single chip layers will be overcome by three-dimensional chips, which will predominate over 2-D chips by 2017. 2017 is long before the predicted date at which we will reach the limit of 2-D computing power (sometime in the 2020s). The next two decades appear to be pretty solid (and without limits) as far as knowing where we are going with computing. How "soon" are we talking about?

>> No.1843095

>>1842744

I think he is mostly right. He is an optimist for sure, and I think he is moderately off in his future estimates. He is banking on us having the foresight to know what to do. We still don't have an extensive knowledge of how even brain waves work.

http://www.physorg.com/news192881674.html

We simply don't know all the elements of the brain. So if we try to extrapolate when we will be able to simulate it, the estimate will be off.

He also seems to be banking on shortcuts which I think will not pan out. He says that the brain is full of redundancies, so we will be able to take computing shortcuts when we simulate the human brain. It seems to me that research tends to have to do things the hard way first before utilizing shortcuts, because what may seem redundant may be important, and there are other reasons why we have to do it the hard way.

I also think he has a poor understanding of economics. Yes, a bad economy doesn't profoundly affect future technological capability, but it does affect technology adoption and distribution. He also advocates some funny regulation and government funding of projects. If you don't understand why government shouldn't fund science, watch this.

http://www.youtube.com/watch?v=TnsLFVGOU4Y

>> No.1843101

>>1843086
>over a century to create posthuman AI
>There will be a Singularity in this century

wat

>> No.1843105

>>1843101

>not of AI changing everything, but simply accelerating progress put in overdrive by the Nanotech Revolution that will also take place between 2030 and 2050. Or earlier.

>> No.1843199 [DELETED] 

A real transhumanist AI will fuck shit up for sure. It will basically make humanity either dead or irrelevant real quick. (Read and shit brix: http://lesswrong.com/lw/qk/that_alien_message/).

Will a strong AI ever be possible? Maybe yes, maybe it's impossible in principle. Right now, computer scientists just hide their complete cluelessness by drawing exponential graphs.

>> No.1843262

I am smart enough to make accurate predictions about extremely complicated subjects.

>> No.1843266

>>1843086
Sounds legit to me.

>> No.1843292

mathematician's take on this:
2^x does not have a singularity, anywhere.
so no singularity. just because people want to see a -1/(x-year) curve doesn't mean it's there.
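The distinction in sketch form: an exponential is finite at every finite x, while a hyperbola of the form 1/(year - x) genuinely diverges as x approaches the year (2045 here is an arbitrary illustrative date, not a prediction):

```python
# 2**x is huge but finite for any finite x; 1/(year - x) blows up as x -> year.
year = 2045  # hypothetical "singularity" date, purely for illustration

exponential_value = 2.0 ** 100            # astronomically large, still finite
hyperbolic_value = 1 / (year - 2044.999)  # grows without bound as x -> year

print(exponential_value < float("inf"))  # True: no singularity anywhere on 2**x
print(round(hyperbolic_value))           # 1000, and -> infinity as x -> 2045
```

That is the poster's point: exponential growth, however fast, never actually reaches a mathematical singularity; only a hyperbolic curve does.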

>> No.1843366

>>1843292
heh