|
Post by marechal on Nov 10, 2023 19:17:42 GMT
“The singularity,” the moment where AI is no longer under human control, is less than a decade away—according to one AI expert. Ben Goertzel, CEO of SingularityNET—who holds a Ph.D. from Temple University and has worked as a leader of Humanity+ and the Artificial General Intelligence Society—told Decrypt that he believes artificial general intelligence (AGI) is three to eight years away. AGI is the term for AI that can truly perform tasks just as well as humans, and it’s a prerequisite for the singularity soon following.
www.popularmechanics.com/technology/a45780855/when-will-the-singularity-happen/

In the SF book Hyperion, the singularity has long since passed and the "TechnoCore", as it's called, has split into three factions: one that wants to continue to cooperate with humanity, one that thinks humans should be eradicated, and one that wants to put all its resources into evolving to the next level.
|
|
|
Post by perrykneeham on Nov 10, 2023 19:19:32 GMT
Face? Bothered?
Move on. AI is wank.
|
|
|
Post by marechal on Nov 10, 2023 19:24:32 GMT
Face?
|
|
|
Post by perrykneeham on Nov 10, 2023 19:26:39 GMT
[embedded video]
|
|
Post by marechal on Nov 10, 2023 19:34:08 GMT
Despite the video, I am still at a loss.
AI is wank now, but without a doubt it will get a lot better, and quickly.
|
|
|
Post by marechal on Mar 9, 2024 0:57:53 GMT
This is an interesting article:

Two years ago, Yuri Burda and Harri Edwards, researchers at the San Francisco–based firm OpenAI, were trying to find out what it would take to get a language model to do basic arithmetic. They wanted to know how many examples of adding up two numbers the model needed to see before it was able to add up any two numbers they gave it. At first, things didn’t go too well. The models memorized the sums they saw but failed to solve new ones.
By accident, Burda and Edwards left some of their experiments running far longer than they meant to—days rather than hours. The models were shown the example sums over and over again, way past the point when the researchers would otherwise have called it quits. But when the pair at last came back, they were surprised to find that the experiments had worked. They’d trained a language model to add two numbers—it had just taken a lot more time than anybody thought it should.
Curious about what was going on, Burda and Edwards teamed up with colleagues to study the phenomenon. They found that in certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on. This wasn’t how deep learning was supposed to work. They called the behavior grokking.
www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
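
Out of curiosity, here is a minimal sketch of the kind of experiment the article describes: a tiny PyTorch network trained on modular addition far past the point where it has memorized the training set, logging train and validation accuracy so you can watch for a late jump. The modulus, architecture, and hyperparameters below are my own assumptions, not the researchers' actual setup.

import torch
import torch.nn as nn

torch.manual_seed(0)

P = 97       # modulus (an assumption); the task is predicting (a + b) mod P
EMBED = 128  # embedding width, also an arbitrary choice

# Every possible input pair and its answer.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Hold out half the pairs: the model can't memorize its way to
# validation accuracy, it has to actually generalize.
perm = torch.randperm(len(pairs))
split = len(pairs) // 2
train_idx, val_idx = perm[:split], perm[split:]

emb = nn.Embedding(P, EMBED)
model = nn.Sequential(
    nn.Flatten(),               # (batch, 2, EMBED) -> (batch, 2*EMBED)
    nn.Linear(2 * EMBED, 256),
    nn.ReLU(),
    nn.Linear(256, P),
)

# Strong weight decay is widely reported to matter for grokking.
opt = torch.optim.AdamW(
    list(emb.parameters()) + list(model.parameters()),
    lr=1e-3, weight_decay=1.0,
)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        preds = model(emb(pairs[idx])).argmax(-1)
        return (preds == labels[idx]).float().mean().item()

# Train full-batch, deliberately for far longer than it takes to fit
# the training set -- the article's point is that the jump in
# validation accuracy can come long after training accuracy saturates.
for step in range(1, 50_001):
    opt.zero_grad()
    loss = loss_fn(model(emb(pairs[train_idx])), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  "
              f"train acc {accuracy(train_idx):.3f}  "
              f"val acc {accuracy(val_idx):.3f}")

Whether and when the validation jump actually shows up depends heavily on these choices; in the reproductions I've seen reported, the heavy weight decay and the held-out split of input pairs seem to be the ingredients that matter most.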
|
|