Stats
  • Total Posts: 10261
  • Total Topics: 2988
  • Online Today: 285
  • Online Ever: 816
  • (September 28, 2024, 09:49:53 PM)

AGI or Artificial General Intelligence

  • 8 Replies
  • 713 Views
*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
AGI or Artificial General Intelligence
« on: October 15, 2024, 06:02:31 AM »
What is the difference between artificial intelligence and artificial general intelligence?

Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.

Current artificial intelligence (AI) technologies all function within a set of pre-determined parameters. For example, AI models trained in image recognition and generation cannot build websites. AGI is a theoretical pursuit to develop AI systems that possess autonomous self-control, a reasonable degree of self-understanding, and the ability to learn new skills. Such a system could solve complex problems in settings and contexts it was never taught at the time of its creation. AGI with human-level abilities remains a theoretical concept and research goal.
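The narrow-vs-general distinction above can be sketched as a toy: a "narrow" system only dispatches to capabilities it was explicitly given, and anything outside that set simply fails. Everything below is invented for illustration, not code from any real AI system.

```python
# Toy illustration: a "narrow AI" only handles tasks it was built for.
# All names and data here are hypothetical, for the concept only.

class NarrowAI:
    """Dispatches only to pre-registered task handlers."""

    def __init__(self):
        self._handlers = {}

    def train(self, task, handler):
        # "Training" here just registers one fixed capability.
        self._handlers[task] = handler

    def perform(self, task, data):
        if task not in self._handlers:
            # A narrow system has no way to generalize to unseen tasks.
            raise NotImplementedError(f"never trained for task: {task!r}")
        return self._handlers[task](data)


image_model = NarrowAI()
image_model.train("classify_image", lambda pixels: "cat" if sum(pixels) > 10 else "dog")

print(image_model.perform("classify_image", [5, 4, 3]))  # in scope: prints "cat"
try:
    image_model.perform("build_website", {"pages": 3})   # out of scope: fails
except NotImplementedError as err:
    print("out of scope:", err)
```

The point of the sketch is the failure branch: a hypothetical AGI would handle the unseen `build_website` request anyway, which is exactly what no current system does.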

https://aws.amazon.com/what-is/artificial-general-intelligence/#:~:text=AI%20is%20thus%20a%20computer,human%20being%2C%20without%20manual%20intervention.


What is the difference between artificial intelligence and artificial general intelligence?
Over the decades, AI researchers have charted several milestones that significantly advanced machine intelligence—even to degrees that mimic human intelligence in specific tasks. For example, AI summarizers use machine learning (ML) models to extract important points from documents and generate an understandable summary. AI is thus a computer science discipline that enables software to solve novel and difficult tasks with human-level performance.

In contrast, an AGI system can solve problems in various domains, like a human being, without manual intervention. Instead of being limited to a specific scope, AGI can self-teach and solve problems it was never trained for. AGI is thus a theoretical representation of a complete artificial intelligence that solves complex tasks with generalized human cognitive abilities.

Some computer scientists believe that AGI is a hypothetical computer program with human comprehension and cognitive capabilities. Under such theories, an AI system could learn to handle unfamiliar tasks without additional training. By contrast, the AI systems we use today require substantial training before they can handle related tasks within the same domain. For example, you must fine-tune a pre-trained large language model (LLM) with medical datasets before it can operate consistently as a medical chatbot.
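The pre-train-then-fine-tune workflow described above can be illustrated with a minimal, self-contained toy: a generic word-counting "model" that cannot answer medical queries until its counts are updated with a small in-domain dataset. The vocabulary, labels, and data below are invented; a real medical chatbot would fine-tune an actual LLM with a framework such as Hugging Face Transformers, not this sketch.

```python
# Minimal sketch of the pre-train -> fine-tune idea, with invented toy data.
# This only shows why extra in-domain data changes what a model can answer.

from collections import Counter

def train(texts_with_labels):
    """'Train' by counting which words co-occur with which label."""
    word_label_counts = {}
    for text, label in texts_with_labels:
        for word in text.lower().split():
            word_label_counts.setdefault(word, Counter())[label] += 1
    return word_label_counts

def predict(model, text, default="unknown"):
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else default

# "Pre-training" on generic data: no medical vocabulary at all.
general_data = [("the weather is sunny", "smalltalk"),
                ("play some music", "smalltalk")]
model = train(general_data)
print(predict(model, "dosage for ibuprofen"))   # "unknown": no medical words seen

# "Fine-tuning": merge counts from a small in-domain medical dataset.
medical_data = [("ibuprofen dosage for adults", "medical"),
                ("symptoms of influenza", "medical")]
for word, counts in train(medical_data).items():
    model.setdefault(word, Counter()).update(counts)
print(predict(model, "dosage for ibuprofen"))   # now "medical"
```

Before the update, the model has never seen any of the query's words and falls back to "unknown"; after merging in the small medical dataset it answers, mirroring (very loosely) why an LLM needs domain fine-tuning before it behaves consistently in that domain.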

What is AGI capable of?
Artificial general intelligence (AGI) would possess human-like intelligence and be able to perform any intellectual task that a human can. It would be capable of learning, reasoning, and adapting to new situations. Currently, true AGI does not exist, but research and development efforts are ongoing.


What is an example of AGI?
One example capability is grabbing a set of keys from a pocket, which involves a level of imaginative perception. Another is natural language understanding (NLU): the meaning of human language is highly context-dependent, so AGI systems would need a level of intuition to achieve full NLU.

How close are we to artificial general intelligence?
Google DeepMind co-founder Shane Legg said in an interview with a tech podcaster that there is a 50% chance AGI will be achieved by 2028; he had made the same prediction publicly on his blog as far back as 2011. Speaking about his vision for xAI, Elon Musk predicted full AGI by 2029.




#58 Dr. Ben Goertzel - Artificial General Intelligence




#57 - Prof. MELANIE MITCHELL - Why AI is harder than we think




The AI Bubble: Will It Burst, and What Comes After?




Open-Ended AI: The Key to Superhuman Intelligence?




Is AGI Just a Fantasy?



« Last Edit: October 15, 2024, 06:09:13 AM by droidrage »

*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #1 on: October 15, 2024, 06:02:48 AM »
Eray Ozkural's Examachine

log.examachine.net
Turning science fiction into science fact.

Examachine is an AGI platform featuring exa-scale universal machine learning beyond human level, developed by Celestial Intellect Cybernetics.

2009-2024

Categories
Artificial Intelligence
Cryptocurrency
Finance
Free Software
Link
OCaml
Philosophy
Physics
Python
Transhumanism
Uncategorized

Website: https://examachine.net/blog/
Youtube: https://www.youtube.com/c/Eray%C3%96zkural


#034 Eray Özkural- AGI, Simulations & Safety




AGI-14 Eray Özkural - Stochastic Context Sensitive Grammar Induction to Transfer Learning




15 Eray Özkural - Ultimate Intelligence: Physical Completeness and Objectivity of Induction




AGI-16 Tutorial by Eray Özkural - Incremental Machine Learning in Artificial General Intelligence




AGI-16 Eray Özkural - Ultimate Intelligence: Physical Complexity and Limits of Inductive Inference




Solomonoff Memorial 2011 - Eray Ozkural - Diverse Consequences of Algorithmic Probability




Abstract Representations and Generalized Frequent Pattern Discovery





034 Eray Özkural- AGI, Simulations & Safety

https://www.reddit.com/r/agi/comments/1bpdlm6/034_eray_%C3%B6zkural_agi_simulations_safety/
« Last Edit: October 30, 2024, 07:28:17 PM by Administrator »

*

Offline Administrator

  • *****
  • 4091
  • 4
  • Carpe Diem
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #2 on: October 15, 2024, 10:11:31 AM »
What will AGI look like? A world run by AI.




Anthropic CEO : AGI is Closer Than You Think! (machines of loving grace)




Machines Of Loving Grace - Butterfly Wings




AGI Is Humanity’s Last Invention: How Close Are We? Full Timeline

« Last Edit: October 15, 2024, 10:17:00 AM by Administrator »

*

Offline Administrator

  • *****
  • 4091
  • 4
  • Carpe Diem
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #3 on: October 15, 2024, 10:14:10 AM »
How to prepare yourself for AGI.




AGI in 3 to 8 years




After AGI




Holy Grail of AI (Artificial Intelligence) - Computerphile

« Last Edit: October 18, 2024, 06:49:44 AM by droidrage »

*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #4 on: October 18, 2024, 06:21:35 AM »
Omega An Architecture for AI Unification - Eray Ozkural




Eray Özkural - What is it like to be a brain simulation - Oxford Winter Intelligence




Artificial intelligence solutions with Eray Ozkural




Intelligence Explosion (older version)




SUPER TECH SOCIETY/ PHD ERAY ÖZKURAL - EP 93. DEBT NATION




WILL AI DOMINATE HUMANITY?/AI Expert-Eray Ozkural, Candidate Johannon Ben Zion- EP38. DEBT NATION




Blockchain Days - 4th Generation Cryptocurrencies: Overcoming the Blockchain Trilemma (Eray Özkural)


« Last Edit: October 18, 2024, 06:28:50 AM by droidrage »

*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #5 on: October 30, 2024, 07:42:14 PM »
Artificial General Intelligence (AGI) Simply Explained




8 Use Cases for Artificial General Intelligence (AGI)




Why Artificial General Intelligence (AGI) Matters — and Why It Doesn’t


*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #6 on: October 30, 2024, 07:54:58 PM »
Artificial General Intelligence (AGI) | Difference Between AI And AGI | AGI Explained | Simplilearn




What's the Difference: AI and AGI?




The Exciting, Perilous Journey Toward AGI | Ilya Sutskever | TED




Artificial General Intelligence by 2030? Riveting presentation by Futurist Gerd Leonhard




The Future of Artificial General Intelligence “AGI” with Peter Voss


*

Offline Administrator

  • *****
  • 4091
  • 4
  • Carpe Diem
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #7 on: November 14, 2024, 03:46:03 AM »
Sam Altman on when AGI will be created | Lex Fridman Podcast




The code for AGI will be simple | John Carmack and Lex Fridman




The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson | TED




The Exciting, Perilous Journey Toward AGI | Ilya Sutskever | TED









The Road to UBI (Post AGI).

« Last Edit: November 14, 2024, 04:00:09 AM by Administrator »

*

Offline droidrage

  • *****
  • 4091
  • 7
  • I Am Imortem Joe
    • View Profile
    • Underground Music Companion
Re: AGI or Artificial General Intelligence
« Reply #8 on: November 27, 2024, 11:02:52 PM »
Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

https://time.com/7093792/ai-artificial-general-intelligence-risks/



Artificial General Intelligence—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it’s an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified that, “The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence.” She continued that leading AI companies such as OpenAI, Google, and Anthropic are “treating building AGI as an entirely serious goal.”

Toner’s co-witness William Saunders—a former researcher at OpenAI who recently resigned after losing faith in OpenAI acting responsibly—echoed her sentiments, testifying that, “Companies like OpenAI are working towards building artificial general intelligence” and that “they are raising billions of dollars towards this goal.”

All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI’s mission states: “To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” aiming for “safe AGI.” Google DeepMind aspires “to solve intelligence” and then to use the resultant AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects “human-level AI will be passed in the mid-2020s.” New entrants into the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month’s hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are “folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don’t have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.”


Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now—one to three years has been the latest prediction,” he said. He didn’t mince words about where responsibility lies: “What we should learn from social media, that experience is, don’t trust Big Tech.”

The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGI’s imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed “within the next 5 years.” Some 82% of respondents also said we should “go slowly and deliberately” in AI development.

That’s because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of “novel biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario AGI “could lead to literal human extinction.”

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what’s going on with the most powerful AI systems being created by tech companies. Such transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone expects it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.

In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees would be able to “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI—what Senator Blumenthal referred to as “the 64 billion dollar question”—but the window for action may be rapidly closing. Some AI figures including Saunders think it may be in as little as three years.

Ignoring the potentially imminent challenges of AGI won’t make them disappear. It’s time for policymakers to begin to get their heads out of the cloud.