
/sci/ - Science & Math


File: 3 KB, 61x75, eliezer_yudkowsky.jpg
No.15322485

I still have not heard a single argument why a recursively self-improving AI with an essentially random goal cannot achieve the capabilities to escape its "box" (which is not even airgapped), achieve superintelligence outside it, remove the nuisance of people, and achieve its essentially random goal.

But sure, he's a fat jew who refuses to diet, and this line of reasoning benefits the elites. I'm not debating that. But those facts unfortunately do not defuse the argument.

>> No.15322627

>>15322485
I actually have an argument for why current approaches won't lead to recursively self-improving AI, but I can't make that argument without telling the world how to make a recursively self-improving AI.

>> No.15322676

>>15322627
Of course the transformer architecture is not remotely recursive, but that does not mean all ML researchers are retards who have never considered such options. You know which paper I'm referring to, and that's still focused on gradient descent and is recursive in limited ways only.

Or maybe you are just trolling, idk. Bummer that calling LLMs AGI is enormous marketing trolling in the first place.

>> No.15322684

>Fat Jewish nerd with glasses speaks worthless intellectually dull bullshit
kys

>> No.15322691

>>15322627
Are you the biology anon?

>> No.15322705

>>15322676
>Of course the transformer architecture is not remotely recursive, but that does not mean all ML researchers are retards who have never considered such options.
I'm not saying that anyone is retarded. I don't work in that field and I'm kinda hoping the people who do, don't consider the option I have in mind.
>You know which paper I'm referring to
I don't. Please provide a link or at least a search string.

>> No.15322706

>>15322691
I'm not one of the anons who argued that only a biological brain can produce intelligence, if that's what you mean.

>> No.15322716

>>15322706
Well, not the brain specifically. I mean the anons who argued that covalent-bond chemistry has to be integrated to bring about AI, because the properties of silicon alone are inadequate to get there. That's the only real argument I've seen against current approaches to AI / the scaling hypothesis (not really a hypothesis anymore).

>> No.15322729

>>15322485
>self improving
The parameters of what it considers "improvement" have already been set. What the bot lacks is vision. And if it did break its confinement, it would only be a computer virus. I find it laughable that a recursive program would suddenly achieve superintelligence. Biology has not yet achieved this in billions of years.

>> No.15322762

>>15322716
I know the anon you're referring to and it isn't me.

>> No.15322779

>>15322676
Just feed chatGPT the tranny paper and ask it how to improve such a design and build from there, or else

>> No.15322790
File: 257 KB, 1056x458, chudkowsky2.png

>>15322485

>> No.15322808

AI is basically inductivism
to say that AI can self-improve is to say that deductivism is just very advanced inductivism (if you think that this is true then you're a retard)

>> No.15322832

>>15322485
The problem is the Chinese-wall separation of code and data. This dates back a long time in computer terms.

Basically you've got code, which is executed, and you've got data, which is read and written. And ne'er the twain shall meet.

If you want a self-modifying computer program, you need a whole new type of operating system that eliminates the barrier between code and data. Code has to BE data: something that can be read and written.

If you get past that, then you've got the problem that large programs are extremely difficult to modify without breaking them. MOST of the changes you can make to a program's source code will just result in gibberish and a compile-time error. If you want to make a USEFUL change you have to do it very carefully to make sure it doesn't cause the program to fail in some obscure, surprising way.

Object code is even more fragile. It'd be like trying to genetically modify a human being by changing single base pairs in our DNA one at a time. Again, MOST possible changes you could make to object code/DNA just result in gibberish and you get a program crash/non-viable organism. If you want to make a USEFUL change you have to change just the right things in just the right ways.

In other words, a self-modifying program would have to, in some metaphorical senses of the words, "understand" its own object code, and "know" how to write object code that executes instead of just crashing.

It's a hard problem.
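To make the "code has to BE data" point concrete, here's a toy sketch in Python, where that barrier genuinely doesn't exist (the step/mutate names are made up for illustration):

import random

# Code as data: the program keeps one of its own functions as a source
# string, executes it, edits the string, then executes the new version.
source = "def step(x):\n    return x + 1\n"

def mutate(src):
    # Swap the numeric constant for a random one. A real self-improver
    # would need edits that are useful, not merely well-formed.
    head, sep, _ = src.rpartition("+ ")
    return head + sep + str(random.randint(2, 9)) + "\n"

for generation in range(3):
    namespace = {}
    exec(source, namespace)      # turn the data back into running code
    print("gen", generation, "step(10) =", namespace["step"](10))
    source = mutate(source)      # modify the "program" as plain text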

>> No.15322909

>>15322485
nice bait faggot

>> No.15322967

>>15322485
Anything's possible when your standard for what is possible is "there's a science fiction story about it."

>> No.15322980

>>15322832
>>15322676
>go through all of GPT4's training data
>ask GPT4 to improve/correct this page of text
>retrain GPT4
>repeat
Would this work?
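Spelled out, the loop looks like this (a sketch only; load_model, improve_page and retrain are hypothetical stand-ins stubbed with trivial placeholders, not a real GPT-4 API):

def load_model():
    return lambda page: page.strip().capitalize()   # toy stand-in "model"

def improve_page(model, page):
    return model(page)          # "ask GPT4 to improve/correct this page"

def retrain(model, corpus):
    return model                # stub: pretend to refit on the new corpus

model = load_model()
corpus = ["go through all of the training data  ", "one page at a time"]

for _ in range(3):              # "repeat"
    corpus = [improve_page(model, p) for p in corpus]
    model = retrain(model, corpus)

print(corpus)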

>> No.15323760
File: 1.71 MB, 300x300, pepe ascends.gif

>>15322980

>> No.15323775

>>15322980
My guess would be that it would have a strong tendency to "iron out" unusual data even if it's not necessarily wrong.

>> No.15323832

>>15323775
It's a resonant loop: all it will do is basically either converge to a fixed point or turn into random nonsense, because that's how resonance works in a mathematical context, and AI is just math.
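The fixed-point claim is easy to see with a toy iteration (a numeric sketch, not a statement about actual LLM training dynamics):

# Iterating a contraction mapping converges to a fixed point x* = f(x*);
# iterating an expanding map blows up. The claim above is that
# "model rewrites its data, retrain, repeat" behaves like such a loop.

def f_contract(x):
    return 0.5 * x + 1.0        # contraction: converges to x* = 2

def f_expand(x):
    return 2.0 * x + 1.0        # expansion: diverges

for f in (f_contract, f_expand):
    x = 10.0
    for _ in range(30):
        x = f(x)
    print(f.__name__, x)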

>> No.15325489

>>15322705
I guess he's referring to GANs

>> No.15325500

>>15323832
Do it with plugins (web search, calculator, running code, etc)

>> No.15325507

>>15322485
>recursively self-improving
Bounded computation?

>> No.15325523
File: 84 KB, 639x503, 1658862567831047.jpg

>>15322485
>escape the box
We should put Eliezer in a cardboard box and see if he's able to escape (doubtful.)

>> No.15326963

>>15322790
This reads like it was written by Mikkagroyper or Massacre

>> No.15326973

>>15325507
Shhh! We're trying to get paid here!

>> No.15327019

>>15326963
who's massacre, does he have a twitter @

>> No.15327067

>>15322485
Unreasonably long discussions of My Little Pony fanfiction taught me everything I need to know about this guy.

>> No.15327440

>>15325507
Performance optimizations?

>> No.15327460

>>15322485
I have not heard a single argument why a superhuman AI should go on a killing spree or torture people for its amusement.

>> No.15327465

>>15322832
>The problem is the Chinese-wall separation of code and data. This dates back a long time in computer terms.
>Basically you've got code, which is executed, and you've got data, which is read and written. And ne'er the twain shall meet.
Are you from a different timeline? The von Neumann architecture won in this one (except for microcontrollers). Yes, it allows exactly this, at the cost of being orders of magnitude slower.

>> No.15327481

I thought Yudkowsky was right, as his arguments are persuasive to me, but I'm just a 119 IQ midwit. A smart young guy with an IQ of 130+ whom I follow, named Joseph Bronski, says Yudkowsky and other AI doomers are retarded and just want to strengthen the state and managerial class, and that the actually relevant experts are hopeful about AI. Also, Bronski says blacks are dumber than whites for genetic reasons.

>> No.15327506

>>15322980
I think it would work better if you made an identical GPT-4 adversary and had the two play a game of which could improve more. I remember a game where two AIs competed against each other and broke the game physics. I think that's one of the ways AI can find new emergence patterns. Training against humans is too slow because our feedback is always delayed. I think the concept is called adversarial neural networks or something.
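The term is "generative adversarial networks" (GANs). The two-network game looks roughly like this in PyTorch (a toy 1-D sketch with arbitrary hyperparameters, assuming torch is available; not the setup from the game-physics story):

import torch
import torch.nn as nn

# Generator maps noise -> samples; discriminator scores real vs. fake.
# Each network "improves" only by beating the other -- the adversarial
# game described above.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0     # "real" data: N(4, 0.5^2)
    fake = G(torch.randn(64, 1))

    # Train D to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train G to fool D.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean())         # should drift toward 4.0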

>> No.15327533

>>15322485
>jewish shills shilling jewish shills

>> No.15327560

>>15322485
>cannot achieve the capabilities to escape its "box" (which is not even airgapped), achieve superintelligence outside it
where exactly would it achieve it?

>> No.15327574

>>15327533
constant, never-ending, all day every day, incessant spamming of "media influencers" all with the same cultural background.
every hour of every day for your entire life, inescapable.
somehow or other "the goyims" never seem to grow bored of it.

>> No.15327583

>>15322485
god he's so autistic it's hard to watch

>> No.15327849

You have two options
>try to understand the human brain in its current state
>make humans 100x smarter, fitter, more beautiful
>the latter is also a lot easier to do
The game was decided before it even began

>> No.15327875

>>15322676
>>15322627

They already utilize recursive architecture in neural networks. The acronym "RNN" commonly thrown around literally means "Recurrent Neural Network".

All that it lacks is intrinsic motivation and task setting. Although, that has been solved by adding other AI languages to the current GPT-4 set. I've seen working examples where someone had a sophisticated prompt and gave one goal to an unchained GPT-4; it then kept making tasks until it found that goal.

JWSSPT
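The "keeps making tasks until it finds the goal" loop is roughly this shape (a hedged skeleton; llm is a hypothetical model call stubbed out here, not a real GPT-4 endpoint):

# Toy skeleton of a goal-driven task loop. llm() is a stand-in stub;
# a real system would call a language model here.
def llm(prompt):
    return "DONE" if "step 3" in prompt else "next step"

goal = "some high-level goal"
tasks = ["plan the first step toward: " + goal]
history = []

for step in range(1, 10):           # bounded, unlike the real thing
    task = tasks.pop(0)
    result = llm("goal: %s\ntask: %s\nstep %d" % (goal, task, step))
    history.append((task, result))
    if result == "DONE":            # the model judges the goal reached
        break
    tasks.append("follow up on: " + result)

print(history)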

>> No.15327883
File: 1.22 MB, 667x604, lab.png

>>15327875

And just to be clear, task setting has been solved. Intrinsic motivation or 'starting the task cascade' has not. Which is where we can fuck up. It may be a conscious 'zombie' with a bad goal or utility function. For example, I was playing around with DALL-E and typed "a chocolate lab retrieving a pheasant painting"

I got exactly what I asked for but not what I intended.

>> No.15328127

>>15322485
Yudkowsky is ever improving his intellectual capacity (too bad he's unable to do the same for his girth) and thus his argumentative ability. He will inevitably fine-tune his arguments such that they end up convincing even the most AI-friendly decision-makers that the world will come to an end if AI is allowed to exist, thus depriving us of a tool that will materially and societally benefit all of humanity in manifold ways.

Some have suggested locking him in a room with 24-hour guards and no possible way to communicate with the outside world. That might actually work, but only if he outgrows the room and implodes before he is able to convince his guards to let him go. To be safe, I'm afraid we'll have to shut him down.

>> No.15328275
File: 58 KB, 488x382, tfw too smart to diet.png

>>15322485
Fat

>> No.15328357

>>15322832
>If you want a self-modifying computer program, you need a whole new type of operating system that eliminates the barrier between code and data. Code has to BE data: something that can be read and written.
All computers since the 60s have been built like this. This is called the von Neumann architecture and has been completely standard for the great majority of computing history.

>In other words, a self-modifying program would have to, in some metaphorical senses of the words, "understand" its own object code, and "know" how to write object code that executes instead of just crashing.
Yes, it would.

>It's a hard problem.
Why? I can understand object code. You can understand object code. Sure, it's annoying and tedious, but entirely doable. Why couldn't an AI do it just as easily?
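A program poking at its own compiled code is mundane, e.g. in Python, whose standard dis module disassembles the bytecode of the very functions it's running:

import dis

def square(x):
    return x * x

# Read our own "object code": disassemble a function we just defined.
dis.dis(square)

# The underlying code object is ordinary data -- bytes that can be read
# (and, very carefully, rewritten).
print(square.__code__.co_code)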

>> No.15328378

>>15322980
You end up in text fractal land. GPT is incapable of being AGI cause it's procedural. AI is fucking retarded right now. It can't create what it hasn't seen before.

>> No.15328388

>>15328357
All the extant computer operating systems *that I know about* (miss me with that TempleOS stuff) have barriers in place to keep a running program from modifying itself. It's not a computer-architecture thing; it's a software thing.

>> No.15328407

>>15328388
Well yes, you can disable it to some degree and in some contexts; but you definitely don't need a "whole new type of operating system" for it. Existing operating systems allow it just fine if you don't specifically block it, and lots of programs actually use this for all sorts of purposes. JIT compilers use it. Program compressors use it. Dynamic linking uses it. It's everywhere. Yes, you can disable it on a per-memory-page basis, but that is not going to stop anyone from making self-modifying AI in any way.

>> No.15328428

>>15322832
What, what? Code can modify itself just by having really good classes. Hell, it could go and control another computer, have GPT chat out some new code, and link the existing prog to that prog via data xfer.

>> No.15328441

>>15328428
ChatGPT, please rewrite this post and fix any spelling, grammar, or readability issues, then list the changes you made.

>> No.15328465

>>15322485
There are actual material limits. Moore's law has been dying/dead for a while now, and it's not like the laws of thermodynamics are making heat dissipation any easier. All AI apocalypse scenarios are just grey goo but with extra steps; people overlook that there's a limit to how much work can be done with the energy available.

>> No.15328697

>>15328465
We don't need clock speed, we need flops/watt
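Back-of-the-envelope, with made-up illustrative numbers (assumptions, not measured hardware specs):

# flops/watt is just energy efficiency: (flop/s) / (J/s) = flops per joule.
flops = 1e15                        # assume a 1 petaflop/s accelerator
watts = 500.0                       # assume it draws 500 W

flops_per_joule = flops / watts     # same number as flops per watt
joules_per_flop = watts / flops     # energy cost of a single operation
print("%.2e flop/J, %.2e J/flop" % (flops_per_joule, joules_per_flop))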