May 2, 2023 8:06 am

Meta Announces Research To Create Human-Level AI

Facebook parent company Meta has announced a new research project with the goal of creating human-level AI that processes data the way humans do.

Is the CICERO AI designed to deceive humans?


Please respond to this survey ❤️ https://forms.gle/7vbM8ZYzrrzQNAos5

Follow the podcast and newsletter here:
– Podcast: https://podcast.apartresearch.com (general) and https://open.spotify.com/show/0h3WOsgUm9Lvd793VHZGrV?si=de65c8273e3b4de8 (Spotify)
– Newsletter: https://newsletter.apartresearch.com/

**Opportunities**

Conjecture appears to be scaling up rapidly and is hiring for both technical and non-technical positions. As they write in the post: “Our culture has a unique flavor. On our website we say some spicy things about hacker/pirate scrappiness, academic empiricism, and wild ambition. But there’s also a lot of memes, rock climbing, late-night karaoke, and insane philosophizing.”
https://ais.pub/conj2

If a job at Conjecture is not for you, you can also take a look at the AI Safety Mentors and Mentees program, which aims to match mentors with mentees to scale up their AI safety work. The program is designed to be “very flexible and lightweight and expected to be done next to a current occupation”.
https://ais.pub/mentor

We also want to flag the pre-announcement of Open Philanthropy’s AI Worldviews Contest, which is expected to take place in early 2023. More information can be found on the EA Forum, though details are still quite sparse.

Finally, Apart received an email pointing our attention to the newly launched AI Alignment Awards, which offer up to $100,000 to anyone who can make progress on two open problems in AI alignment research. Give their website a visit if this sounds like something for you!
https://www.alignmentawards.com/

**Sources**
Meta AI announces CICERO, an agent able to play the board game Diplomacy better than most humans – https://www.science.org/doi/10.1126/science.ade9097
Clarifying ‘wireheading’ terminology
https://www.alignmentforum.org/posts/REesy8nqvknFFKywm/clarifying-wireheading-terminology
The “loss of control” scenario rests on a few key assumptions that are not justified by our current understanding of artificial intelligence research
https://windowsontheory.org/2022/11/22/ai-will-change-the-world-but-wont-take-it-over-by-playing-3-dimensional-chess/
Alignment Research Center: when deduction suddenly becomes deceptive – “Formalizing the Presumption of Independence”
https://arxiv.org/abs/2211.06738
Monosemanticity in neuron responses is great for interpretability
https://arxiv.org/abs/2211.09169 (https://www.alignmentforum.org/posts/LvznjZuygoeoTpSE6/engineering-monosemanticity-in-toy-models)
In case we need some more thoughts on EA’s relation to funders
https://www.lesswrong.com/posts/p4XpZWcQksSiCPG72/sadly-ftx#The_Future_of_Effective_Altruist_Ethics &
https://forum.effectivealtruism.org/posts/NeK9XYY2mDsH5bJdD/our-recommendations-for-giving-in-2022
Comparing AI Alignment research to orthodox and reform religions
https://www.lesswrong.com/posts/XKraEJrQRfzbCtzKN/distillation-of-how-likely-is-deceptive-alignment
Conjecture report
https://www.lesswrong.com/posts/bXTNKjsD4y3fabhwR/conjecture-a-retrospective-after-8-months-of-work-1
AlphaGo beating Ke Jie at Go (five years ago)
https://www.bbc.com/news/technology-40042581

CICERO: the shocking new AI from Meta


Hi, thanks for watching our video about transformer models.
In this video we’ll walk you through:
– CICERO
– smart AI
– Meta AI
– advanced AI
– deep learning
– ChatGPT
– transformer models
– Diplomacy
– video games

In this video, we dive deep into the CICERO model from Meta AI. CICERO is an AI agent that negotiates, persuades, and cooperates with people. It demonstrated this by playing on webDiplomacy.net, an online version of the game Diplomacy, where it achieved more than double the average score of the human players and ranked in the top 10 percent of participants who played more than one game.
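
The Science paper cited in the sources below describes CICERO as pairing a strategic planning module with a dialogue model conditioned on the moves the agent currently intends to play. The following is only a minimal sketch of that control flow; `GameState`, `plan_moves`, and `generate_message` are hypothetical placeholders, not Meta’s implementation or API.

```python
# Minimal sketch of an intent-conditioned dialogue agent loop, in the spirit of
# CICERO's published design (strategic planner + dialogue model conditioned on
# planned moves). Every class and function here is a hypothetical placeholder.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class GameState:
    """Toy stand-in for a Diplomacy board position plus message history."""
    power: str                                  # the power this agent controls
    messages: list = field(default_factory=list)


def plan_moves(state: GameState) -> list[str]:
    """Hypothetical strategic-reasoning step (CICERO uses planning + RL here)."""
    return [f"{state.power}: hold all units"]   # trivial placeholder policy


def generate_message(state: GameState, intents: list[str], recipient: str) -> str:
    """Hypothetical dialogue model conditioned on the agent's own intended moves."""
    return f"To {recipient}: my plan this turn is '{intents[0]}'. Shall we coordinate?"


def play_turn(state: GameState, other_powers: list[str]) -> list[str]:
    """One negotiate-then-order cycle: plan, talk about the plan, then re-plan."""
    intents = plan_moves(state)
    for power in other_powers:
        state.messages.append(generate_message(state, intents, power))
    return plan_moves(state)                    # final orders after dialogue


if __name__ == "__main__":
    state = GameState(power="FRANCE")
    orders = play_turn(state, other_powers=["ENGLAND", "GERMANY"])
    print("\n".join(state.messages))
    print("Final orders:", orders)
```

In the actual system described in the paper, the planning step combines search with a learned policy and the dialogue model is a large language model fine-tuned on human Diplomacy conversations; the stubs above only show how the two pieces fit together.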

LINKS USED:

Google cloud : https://youtu.be/SZorAJ4I-sA
IBM cloud : https://youtu.be/ZXiruGOCn9s
Lex : https://youtu.be/9uw3F6rndnA

**Sources**
Meta AI announces CICERO, an agent able to play the board game Diplomacy better than most humans – https://www.science.org/doi/10.1126/science.ade9097
Diplomacy overview – https://www.youtube.com/watch?v=prrOwPAsot8

Creating Human-Level AI: How and When | Ray Kurzweil


Ray Kurzweil explores how and when we might create human-level artificial intelligence at the January 2017 Asilomar conference organized by the Future of Life Institute.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

For more information on the BAI ‘17 Conference:

https://futureoflife.org/ai-principles/

https://futureoflife.org/bai-2017/

https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/

Meta Takes Down Galactica AI Used for Scientific Research


After releasing the Galactica AI demo on November 15th, Meta had to take it down on November 17th because of major issues with the information it produced.

Galactica AI is intended for scientific research: helping people organize the vast amount of information and research papers released over the past few years. It is supposed to be the search engine for research.

Meta reports that Galactica outperforms GPT-3 on technical knowledge probes (68.2% versus 49.0% on LaTeX equations), and it attempts to take AI-generated content to a new level.

The problem Meta is trying to solve: researchers are buried under a mass of papers, increasingly unable to distinguish the meaningful from the inconsequential.

Galactica is a powerful large language model (LLM) trained on over 48 million papers, textbooks, reference material, compounds, proteins and other sources of scientific knowledge.
It can be used to explore the literature, ask scientific questions, write scientific code, and much more.
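
As an illustration of the “ask scientific questions, write scientific code” use case, here is a minimal generation sketch. It assumes the smaller Galactica checkpoints Meta published on Hugging Face (for example facebook/galactica-125m) and the generic transformers causal-LM API rather than any Galactica-specific tooling; the prompt is just an example.

```python
# Minimal sketch: prompting a small Galactica checkpoint through Hugging Face
# transformers (assumes `pip install transformers torch`). Larger checkpoints
# (1.3b, 6.7b, 30b, 120b) follow the same pattern but need far more memory.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "facebook/galactica-125m"  # smallest published checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Galactica was trained on scientific text, so a plain scientific prompt works;
# the paper also describes special markers (e.g. "TLDR:") for summarization.
prompt = "The Transformer architecture consists of"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

The same tendency to produce fluent but unreliable text that such a sketch would surface is exactly the issue that led Meta to pull the public demo.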

Meta has invested billions of dollars in Artificial Intelligence and is trying to leapfrog OpenAI, but it seems as though the release of Galactica was a bit premature.
