
/sci/ - Science & Math



File: neet.png
No.15323533

What are the actual requirements for building an autonomous machine intelligence that can manage planetary resources better than humans? Or to put it another way, what are the obstacles to building intelligent robots that can carry out the tasks required to maintain a decreasing population of humans?

>> No.15323633

>>15323533
Obviously, it isn't just the dumb interpolation the public is BAWWWing about. At this stage nobody has a fucking clue; neural networks work disturbingly well, but until we start to construct a formal, mathematical theory of these models (and of AI in general), it'll be like throwing darts in the dark. Personally, I think reinforcement learning is the true route to AGI.

>> No.15323741
File: OCT_B-Scan_Setup-en.svg.png

>>15323633
So when do I get neetbux from AGI is all I wanna know.

>> No.15323850
File: chomsky.png

We already know what automata that predict the next symbol can do, both from a computer-science perspective and a mathematical-linguistics one, and we know what's computable and what isn't. That's why real research in theoretical computer science has stagnated since the 60s: the halting problem, grammar induction, and Kolmogorov complexity all being uncomputable killed it. Kleene debunked "neural networks" in the original report where he comes up with regular expressions. If you have magical oracles you can do amazing reinforcement learning in an imaginary world, though. If you look at it from the perspective of theorem proving, you end up with equivalent problems in another formalism. All the negative results are the dirty secret of computer science.
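Kleene's point in miniature: a net of fixed McCulloch-Pitts threshold units is exactly a finite automaton, no more. A toy sketch (my own encoding, not Kleene's notation) recognizing the regular language "even number of 1s":

import numpy as np

# Kleene 1956: nerve nets of threshold units recognize exactly the
# regular languages. Toy instance: a 2-state DFA for "even number of 1s"
# realized with fixed weights and thresholds, no learning anywhere.

def fires(v, theta):
    return (v >= theta).astype(int)   # Heaviside threshold unit

# DFA: q0 = even, q1 = odd; delta[(state, symbol)] -> next state
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def accepts(bits):
    state = np.array([1, 0])              # one-hot, start in q0 (even)
    for b in bits:
        x = np.array([1 - b, b])          # one-hot input symbol
        # layer 1: AND units, one per (state, symbol) pair, threshold 2
        u = fires(np.add.outer(state, x).ravel(), 2)  # u[2*i + a]
        # layer 2: OR units, one per next state, threshold 1
        nxt = np.zeros(2, dtype=int)
        for (i, a), j in delta.items():
            nxt[j] += u[2 * i + a]
        state = fires(nxt, 1)
    return bool(state[0])                 # accept iff back in q0

print(accepts([1, 0, 1]))   # True: two 1s
print(accepts([1, 1, 1]))   # False: three 1s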

>> No.15323855

>>15323850
>All the negative results are the dirty secret of computer science.
This is what AI popularizers and grifters won't discuss. You hit the nail on the head with your post, and anyone who's in the industry or who does CS research will tell people as much if they're honest.

>> No.15323860

>>15323850
damn, so i'm never going to get neetbux from AGI. That's a real bummer

>> No.15323867

>>15323850
>Kleene debunked "neural networks" in the original report where he comes up with regular expressions.
Have you been living under a rock for the past 40 years? Multi-layer feed-forward networks were proved to be universal function approximators in the 80s.
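For flavor, what those results license: one hidden layer of sigmoids fits any continuous function on a compact set, given enough units. A minimal sketch (the architecture and hyperparameters here are my arbitrary choices, not from any paper):

import numpy as np

# Toy instance of the '80s universal-approximation results (Cybenko 1989;
# Hornik, Stinchcombe & White 1989): one hidden sigmoid layer, trained
# with plain full-batch gradient descent to fit sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H = 20                                        # hidden units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sig(X @ W1 + b1)                      # hidden activations
    out = h @ W2 + b2                         # network output
    err = out - y                             # grad of 0.5*MSE w.r.t. out
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)           # backprop through the sigmoid
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("max |f(x) - sin(x)|:", float(np.abs(out - y).max()))  # shrinks toward 0

Note the theorem only says such weights exist; it says nothing about gradient descent finding them, which is the part the theory still doesn't cover.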

>> No.15323873

>>15323867
that's nice but which function gives me neetbux?

>> No.15323884

>>15323873
Get a job retard.

>> No.15323885

>>15323884
i thought AGI was going to do my job tho? what gibs?

>> No.15323892

>>15323867
I read a book on those published in the 1990s where they used one to approximate the Lorentz attractor. Can't remember the title though.

>> No.15323894

>>15323885
Nothing too drastic is gonna happen right now. At most, shitty 'office jobs' (i.e. finance, customer service) get replaced; it's still a long way from doing effective programming, let alone more sophisticated and maths-heavy jobs. Also manual labour. It's got to take more than just large language models to actually start replacing humans.

>> No.15323897

>>15323892
why do you need to approximate the lorentz attractor if you can just simulate it? it's only like 10 lines of code. i did it as an undergrad and it was super easy
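it's literally just this (forward Euler with the textbook parameters sigma=10, rho=28, beta=8/3):

# Lorenz system, forward Euler, about 10 lines as advertised
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.01, 10000
x, y, z = 1.0, 1.0, 1.0
traj = []
for _ in range(steps):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    traj.append((x, y, z))
print(traj[-1])  # a point on the butterfly-shaped attractor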

>> No.15323903

>>15323897
Pretty sure it was just a demonstration of NN capabilities as a universal function approximator.

>> No.15323905

>>15323903
that's retarded, you replaced 10 lines of code with a black box

>> No.15323920

>>15323905
It's called a fucking 'demonstration' for a reason. They were probably experimenting to see convergence rates for different backprop algorithms and NN structures.

>> No.15323923

>>15323920
i can demonstrate jerking off but that doesn't mean i'm having children any time soon.

is everyone in AI retarded? is that like a requirement to be in the field?

>> No.15323925

>>15323923
Do you know what a benchmark is?

>> No.15323933

>>15323925
why don't you explain it while i stroke my cock

>> No.15323934

>>15323850
Crazy that people still play with these symbols lol, didn't know it was still funded.

>> No.15323940

>>15323933
It's a specific task used to compare different algorithms; in this case, predicting the lorentz attractor.
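In miniature (my own toy example, not whatever that book used): same task, same data, same metric, different algorithms.

import numpy as np

# A benchmark in miniature: one-step-ahead prediction of a noisy series,
# two algorithms scored with the same metric on the same data.
rng = np.random.default_rng(1)
t = np.arange(2000)
series = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)

# algorithm A: persistence (predict "no change")
mse_a = np.mean((series[1:] - series[:-1]) ** 2)

# algorithm B: linear AR(2) model fit by least squares
X = np.column_stack([series[1:-1], series[:-2], np.ones(t.size - 2)])
w, *_ = np.linalg.lstsq(X, series[2:], rcond=None)
mse_b = np.mean((X @ w - series[2:]) ** 2)

print(f"persistence MSE: {mse_a:.4f}   AR(2) MSE: {mse_b:.4f}")  # B should win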

>> No.15323954
File: levin-search.png

>>15323934
If the real research continued, I don't think it's in the open literature; formal languages were such a hot topic in the 60s and 70s.

>> No.15323955

>>15323940
i'm starting to ponder whether you actually know what you're talking about. do you even know what the lorentz attractor means?

>> No.15323957

>>15323533
We don't know, and anyone who pretends to know is full of shit or one of the "I'm s-scared of AGI" grifter gatekeeping retards.

>>15323633
Probably all the things we're building today will be useful tools for AGI, but I don't think any method on its own will evolve into a true general intelligence.

>> No.15323972

>>15323955
Fuck, it's Lorenz (I always mix those up). It's three differential equations describing convection. But that's beside the point: my point is that simulation != prediction. Say I gave you a tuple (x, y, z) of the variables of a Lorenz attractor, without telling you that's what it is, and asked you to predict the next tuple purely from observing how the tuple evolves. Not so easy, is it?
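To make it concrete, a toy sketch (my own setup: quadratic features plus least squares, not any particular paper's method). The fit nails a single step, but a positive Lyapunov exponent amplifies whatever residual error is left, so iterated prediction falls apart:

import numpy as np

sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.01

def lorenz_step(p):
    x, y, z = p
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

traj = [np.array([1.0, 1.0, 1.0])]        # the observed data: tuples only
for _ in range(5000):
    traj.append(lorenz_step(traj[-1]))
traj = np.array(traj)

def feats(P):                              # quadratic feature map
    return np.column_stack([P, P[:, 0] * P[:, 1], P[:, 0] * P[:, 2],
                            P[:, 1] * P[:, 2], np.ones(len(P))])

W, *_ = np.linalg.lstsq(feats(traj[:-1]), traj[1:], rcond=None)

# one-step prediction: essentially exact
print(np.abs(feats(traj[2500:2501]) @ W - traj[2501]).max())

# iterated prediction: tiny errors compound exponentially (chaos)
pred, true = traj[2500].copy(), traj[2500].copy()
for _ in range(4000):
    pred = (feats(pred[None, :]) @ W)[0]
    true = lorenz_step(true)
print(np.abs(pred - true).max())   # typically grows to the attractor's scale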

>> No.15323976

>>15323957
It's interesting to note that analog computation is almost totally ignored, from both an engineering and a research perspective (probably because we don't have enough mathematics for it). Though supposedly Synaptics touchpads have analog neural networks inside. There was also that book on "super-recursion" that was interesting to ponder.

The Lorenz attractor was specifically chosen because it's chaotic.

>> No.15324037

>>15323972
this isn't even wrong anon

>> No.15324076

>>15324037
bait harder

>> No.15324082

>>15324076
you've put the words together but it's obvious you have no clue what you're talking about. i'm just letting you know so you don't make a fool of yourself in public

>> No.15324091

>>15324082
Well, tell me anon, wat is a Lorenz attractor?

>> No.15324093

Why did AI take until today to reach this point, rather than 10 years ago? It wasn't a hardware issue, so what was it?

>> No.15324096

>>15324091
A pooper scooper.

>> No.15324098

>>15324093
data

>> No.15324105

>>15324096
fuck u

>> No.15324132

>>15324098
Oh, the data just had to be compiled and built up.

Well, that's gay.

>> No.15324133

>>15324105
Sorry timmy, no sounding cool tonight. I'm sure stephan at the uni bar will drop her knickers hearing you talk about the lorenz attractor.

>> No.15324144

>>15323533
Humans can take objects and apply subjective relations to those objects, and then change these subjective relations any way they want. Humans will then take a record of these subjective relations and use it to create subjective relations that pertain to future events.

>> No.15324149

>>15324144
In addition to this, humans are able to form coherent versions of this no matter how many other humans they backtest their subjective relations against, because there is a basic framework that creates a general trend of subjective relations.

>> No.15324150
File: attractor manifold.png

>>15324144
how does this help with AGI singularity tho

>> No.15324154

>>15324144
>>15324149
So for AGI, we need an elegant solution that can create outputs that are orders of magnitude above its inputs, in multiple different ways. The same elegant code would have to be extremely powerful for autonomous driving, but also extremely powerful for emergency surgery, and also extremely powerful as the workhorse of the modern tech worker. All different uses, but the same code should be powerful enough to manage all of this, in the same way that human DNA is elegant enough to manage such variety with efficiency.

>> No.15324161

>>15324150
You need to know what kind of logic to build your pesky robots with. ChatGPT4 doesn't have the same universal applicability as even a single neuron. A neuron is very good at a large number of different tasks; its programming is simple and elegant and can do a lot of different things. ChatGPT4, on the other hand, can do a few things very well but breaks down on things it isn't designed for. So a single neuron is, so far, still better than ChatGPT4.

There is a plasticity and an order-of-magnitude adaptability that is lacking in AI at this time.

>> No.15324170
File: s.jpg

>>15324161
i agree but why are the AI hypefags convinced they're making real progress towards real autonomous intelligence

>> No.15324173

>>15324170
Because they financially benefit by getting suckers to invest in their projects, or they're the suckers.

>> No.15324174

>>15324161
>A neuron is very good at a large number of different tasks; its programming is simple and elegant and can do a lot of different things.
What do u even mean by this? I'm inclined to call u 'schizo' but u talk too coherently.

>> No.15324177

>>15324150
Wow what a faggot. You cretins want AI to replace people. Is it because you lot are already ugly dull fuckers ignored by the masses? Yes, yes it is.

>> No.15324182
File: san-francisco-homeless-uses-vr-headset-v0-bu566t1mou5a1-2690383583.jpg

>>15324177
i just don't want to work and these faggots keep promising me robots but not delivering so how long are they gonna keep this dog and pony show going?

>> No.15324190

>>15324182
That's great, as long as it all doesn't turn fucking weird, like your image.

>> No.15324218
File: grift.png

>>15324173
The same grift has been going on since the 1950s (pic related), even using the same language. People forget the "AI winter" in the 1980s, when "expert systems" failed to deliver (which was partly a hardware problem). For practical purposes we now have ridiculous storage and compute compared to all the supercomputers of 1980, even in just a ThinkStation, and we still can't solve the problem. If you read the research from the Fifth Generation Computer project, they estimated that humans can only learn a few megabytes of information in a lifetime, and there's Bell Labs research to back it up: humans only remember about 2 bits/second under all conditions.
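Back-of-envelope on that 2 bits/second figure (my arithmetic and assumptions, not the original paper's):

# lifetime memory at ~2 bits/s, assuming 16 waking hours/day, 70 years
bits_per_sec = 2
waking_secs = 16 * 3600 * 365 * 70        # about 1.5e9 seconds
total_bits = bits_per_sec * waking_secs   # about 2.9e9 bits
print(total_bits / 8 / 1e6, "MB")         # roughly 370 MB

Whether you take the few-megabytes estimate or this hundreds-of-megabytes one, it's trivially small next to modern storage, which is the point: capacity was never the bottleneck.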

>> No.15324223

>>15324190
wireheading is a known problem. someone already killed themselves after chatting about climate change

> Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change

> https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

>> No.15324229

>>15324218
how is that possible? the visual field alone contains way more than 2 bits of information per second

>> No.15324230

>>15324223
Define wireheading

>> No.15324235

>>15324229
You can't remember it all, though; it stays in short-term memory for about 2 seconds. Look up the visuospatial sketchpad research. I'd cite the research on the 2 bits/second figure, but it's been 20 years or so since I read it.

>> No.15324242

>>15324230
sit in front of a computer all day and press buttons on the keyboard to feel nice

>> No.15324259
File: rl_is_all_you_need.png

>>15323533
RL is all you need.

>> No.15324268

>>15324259
Why aren't you using it to predict the forex market and make 100x bags of money, then? It's just a single time series, and the reward function is profit and loss on trades.
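The setup really is that easy to write down, which is exactly the trap. A toy sketch (tabular Q-learning; the state discretization and the synthetic random-walk "prices" are mine, purely for illustration):

import numpy as np

# Toy RL trading loop: reward = mark-to-market PnL, as described above.
# On a pure random walk there is nothing to learn, and real markets add
# non-stationarity and adversaries on top.
rng = np.random.default_rng(42)
prices = 100.0 + np.cumsum(rng.normal(0, 1, 20000))
returns = np.diff(prices)

def state(t):
    # crude state: signs of the last two returns -> 4 discrete states
    return int(returns[t - 1] > 0) * 2 + int(returns[t - 2] > 0)

Q = np.zeros((4, 3))              # actions: 0 = flat, 1 = long, 2 = short
pos = {0: 0.0, 1: 1.0, 2: -1.0}
alpha, gamma, eps = 0.1, 0.95, 0.1

for t in range(2, len(returns) - 1):
    s = state(t)
    a = rng.integers(3) if rng.random() < eps else int(Q[s].argmax())
    reward = pos[a] * returns[t]  # PnL from holding over the next step
    Q[s, a] += alpha * (reward + gamma * Q[state(t + 1)].max() - Q[s, a])

print(Q)   # no action should dominate: the walk has no exploitable pattern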

t. worked on this in 2008-2010.

>> No.15324305

>>15324268
I'm sure the best algo traders are using RL in some way. I also think self-driving would probably have been solved by now if they had used RL more.

>> No.15324314

>>15324305
The problem is non-stationary data, plus a bunch of non-linearity and noise. And you're competing against other people looking for patterns they can arbitrage away. I don't doubt it either, but the "bootstrap from nothing" idea is an oversimplification (just like GPT).
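Non-stationarity in one toy example (my own, for illustration): fit on one regime, score on another.

import numpy as np

# An AR(1) whose coefficient flips sign mid-series: a model fit on the
# first regime is confidently wrong on the second.
rng = np.random.default_rng(7)
x = np.zeros(4000)
for t in range(1, 4000):
    a = 0.9 if t < 2000 else -0.9
    x[t] = a * x[t - 1] + rng.normal(0, 1)

w = np.polyfit(x[:1999], x[1:2000], 1)[0]   # slope fit on regime 1 only
pred = w * x[2000:-1]                       # applied to regime 2
print("fitted coefficient:", round(w, 2))   # about +0.9
print("regime-2 MSE:", np.mean((x[2001:] - pred) ** 2))  # far above the noise floor of 1.0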

>> No.15324336

>>15324173
>Because they financially benefit by getting suckers to invest in their projects, or they're the suckers.
That's what it boils down to.
>>15324218
I do this for a living, but I'm only concerned with making intelligent algorithms. I don't understand what the fuck is happening in the news. Has a single one of these faggots watched 2001: A Space Odyssey? Or played MGS? They don't appreciate what makes a living thing compute.

>> No.15324344

>>15324336
NOVA just released a little documentary a few days ago; if you compare it to Mind Machines (1978), hosted by Arthur C. Clarke, it's almost like they wanted to make a shot-for-shot remake. It's hilarious, because they show SHRDLU, which Terry Winograd admitted in the 90s was a fake demo. The ray-traced display is awesome though.

>> No.15324366

>>15324305
I think self-driving isn't solved because they cheap out on the sensors; it should be doable with good sensors: 360-degree high-res cameras plus lidar and radar, say. It would be a good testbed if the car companies collected all the data and open-sourced it, or formed a consortium to make a standard (maybe even open) system. The move away from open systems is a big problem with everything, of course. How many people are actually seriously working on AI, and how many are just trying to signal that they're part of the conversation? The writing was on the wall for machine learning in the early 2000s, and I remember my university's math research library only had two books on it.

>> No.15324376

>>15324344
https://youtu.be/Fd1ro4TEmDc?t=2376
https://www.youtube.com/watch?v=2Nk-m7ZJ3wo

Both NOVA documentaries; it's hilarious how dumbed-down the new one is compared to the old one. The part about people making attributions about the chatbot in 1978 shows how things never change...

>> No.15324683
File: FsXQAiVaEAE3gGf.jpg

>>15324376
Very cool. Thanks for the links, bro.

>> No.15325009

>>15324133
>I'm sure stephan at the uni bar will drop her knickers hearing you talk about the lorenz attractor.

Stephanie here. Not with his kindergarten-tier understanding of non-linear dynamics I won't.

>> No.15325017

>>15325009
Everyone knows there are no women here, Stephanie