Artificial Intelligence, Yogi Nelson

IS BULGARIA A LEADER IN ARTIFICIAL INTELLIGENCE EDUCATION?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correct! Here no question is too mundane. As a bonus, a proverb is also included. Today’s question comes from Fernie in Las Vegas, who wants to know: is Bulgaria a leader in artificial intelligence education and research?

Fernie, you came to the right place. Rather than answer directly, I will share information about a certain institute located in Sofia, Bulgaria, and let you decide. Let’s jump right into it by shouting out the Institute for Computer Science and Artificial Intelligence Technology, INSAIT.
https://insait.ai/

INSAIT is a mere toddler; it was founded in April 2022. INSAIT is the first institute in Eastern Europe dedicated exclusively to computer science and AI education and research. INSAIT is part of Sofia University but functions autonomously. Holy independence, Batman!

INSAIT is backed by technology elites, including two of Switzerland’s most prestigious universities and global technology behemoths such as Google, DeepMind, and VMware. INSAIT says its sole focus is scientific excellence: conducting world-class research, attracting outstanding international scientists, and training the next generation of technology leaders. Let’s peruse their course offerings, beginning with machine learning.

Machine Learning. INSAIT defines machine learning as the area of computer science that focuses on extracting knowledge from large data sets and using this knowledge to make predictions and decisions. INSAIT believes the combination of colossal data sets and increasing access to powerful computation means there are exciting opportunities for data-driven algorithms.
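INSAIT’s definition can be seen in miniature. Here is a toy sketch in plain Python (the data and names are my own invention, purely for illustration): it "extracts knowledge" from a small data set by fitting a line with least squares, then uses that knowledge to make a prediction.

```python
# Toy machine learning: learn a pattern from data, then predict with it.
# Made-up data: hours studied -> exam score.
data = [(1, 52), (2, 58), (3, 61), (4, 67), (5, 71)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Least-squares fit: slope and intercept of the best-fitting line.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict(hours):
    """Use the learned pattern to predict a score for unseen input."""
    return slope * hours + intercept
```

Real machine learning swaps the five data points for millions and the line for a neural network, but the loop is the same: fit, then predict.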

Computer Vision. According to INSAIT, computer vision is an interdisciplinary field that studies algorithms through which computers can comprehend images at a high level. INSAIT says, and with good reason, there are numerous applications for computer vision, including autonomous driving, robotics, object recognition, gesture analysis, tracking, scene understanding, medical imaging, and embodied AI. Will AI be peeking through my window? LOL.
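High-level image understanding is built on low-level operations. As a toy sketch (not INSAIT code; the tiny "image" is invented), here is the simplest possible vision step in Python: finding vertical edges in a grayscale grid by differencing neighboring pixels.

```python
# Minimal edge detection: a bright region next to a dark region
# produces a large pixel-to-pixel difference, i.e., an edge.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

# Horizontal gradient: absolute difference between adjacent pixels.
edges = [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
         for row in image]
```

Every row of `edges` comes out `[0, 9, 0]`: the big value marks where dark meets bright. Object recognition, tracking, and the rest are towers built on humble gradients like this one.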

Quantum Computing (QC). Quantum computing is hyper computing: QC leverages the properties of quantum states. Look out!!! QC is revolutionary and will make what is now impossible or impractical possible in seconds. Holy no speed limits, Batman! INSAIT is focused on the software side of QC, including programming languages, optimization, verification, and algorithms.
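To make "properties of quantum states" concrete, here is a minimal single-qubit simulation in plain Python (my own illustration; real QC software uses full frameworks): applying a Hadamard gate puts a qubit into an equal superposition of 0 and 1.

```python
# Simulate one qubit as a 2-entry state vector of amplitudes.
import math

state = [1.0, 0.0]                      # qubit starts in state |0>
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                   # the Hadamard gate

# Apply the gate: matrix-vector multiplication.
state = [sum(H[i][j] * state[j] for j in range(2)) for i in range(2)]

# Measurement probabilities are squared amplitudes: a 50/50 split.
probs = [a * a for a in state]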

Cyber Security. INSAIT intends to become a cyber security tour-de-force. INSAIT says it will focus on cryptography and developing new security protocols, including communication systems. Holy bat shield, Batman!

Automated Reasoning. Reasoning is central to human intelligence and allows us to draw conclusions based on logic, says INSAIT. Not surprisingly, INSAIT has a focus on automated reasoning related to fundamental questions in math and logic. Their automated reasoning program examines decision making under uncertain conditions. Fabulous! All of life works under uncertain conditions! LOL!
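Drawing conclusions by logic can be sketched in a few lines. Below is a toy forward-chaining reasoner in Python (the rules and facts are invented; real automated reasoners, such as SMT solvers, are vastly more sophisticated): it applies if-then rules until no new facts emerge.

```python
# Toy automated reasoning: forward chaining over if-then rules.
# Each rule is (set of premises, conclusion).
rules = [({"rain"}, "wet_ground"),
         ({"wet_ground"}, "slippery"),
         ({"slippery", "running"}, "fall_risk")]
facts = {"rain", "running"}

changed = True
while changed:                          # repeat until a fixed point
    changed = False
    for premises, conclusion in rules:
        # Fire the rule if all premises are known and the conclusion is new.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Starting from "rain" and "running", the loop derives "wet_ground", then "slippery", then "fall_risk", chained modus ponens, which is the seed from which theorem provers grow.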

Natural Language Processing. Speech perception is central to human language. According to INSAIT, natural language processing is the study of computational techniques for the automated processing and analysis of speech and text. INSAIT aims to develop principled algorithms, informed by linguistic insights, for processing massive amounts of text and speech data. INSAIT, keep in mind, the spoken word is only half of all communication!
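Processing massive amounts of text starts with humble steps. A minimal sketch using only the Python standard library (the sample sentence is mine): tokenize text and count word frequencies, the "hello world" of NLP.

```python
# Tokenize a sentence into lowercase words and count frequencies.
import re
from collections import Counter

text = ("The spoken word is only half of all communication. "
        "The other half is listening.")
tokens = re.findall(r"[a-z']+", text.lower())   # crude word tokenizer
freq = Counter(tokens)
```

Word counts like these underlie classic techniques from spam filters to TF-IDF search; modern systems replace the counts with learned embeddings, but the tokenization step survives.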

INSAIT also supports graduate studies in data management, algorithms, and computer programming languages. A complete buffet of AI graduate education for the amazing price of $36,000 per year, including room and board. If I were 22 again I would … but I’m closer to 122 LOL.

Time to end with a proverb from Bulgaria, where they say, “A slip of the foot is not nearly as dangerous as a slip of the tongue.”

Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

IS IT POSSIBLE TO EARN $10M PROVING WEAK AI CAN SUPERVISE STRONG AI?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correct! Here no question is too mundane. As a bonus, a proverb is included. Today’s question is from Victor in Altadena, CA. He wants to know if he can earn $10M by developing a Weak artificial intelligence (AI) system capable of supervising a Strong AI.

Victor, you came to the right place. On December 14, 2023, OpenAI released a research paper titled Weak-to-Strong Generalization, in which they explored the very question you asked!

https://openai.com/research/weak-to-strong-generalization

OpenAI starts with a premise: Strong AI systems will be vastly more intelligent than humans within 10 years. Holy left behind, Batman! OpenAI says our current method of controlling AI relies on human supervision. However, future AI systems (Strong AI) will be capable of extremely complex and creative behaviors, making it difficult for humans to reliably supervise them. Sounds like a sci-fi horror movie! Based on that premise, OpenAI offers the following hypothesis: humans can control weak AI systems, and weak AI systems can be designed, built, and used to control much stronger AI systems. Therefore, controlling Strong AI systems in the future should be possible. OpenAI says its hypothesis has shown promising results in a lab setting. Let’s unpack their research and findings.

I’ll quote from the report, which concedes there remain “important disanalogies between our current empirical setup and the ultimate problem of aligning superhuman models.” In other words, more work is needed.

OpenAI says it “may be easier for future models to imitate weak human errors than for current strong models to imitate current weak model errors, which could make generalization (control) harder in the future. Nevertheless, we believe our setup captures some key difficulties of aligning future superhuman models, enabling us to start making empirical progress on this problem today.” Said succinctly: we know managing superhuman models is likely to become a major issue, but we are working on it! Are you reassured?

Let’s examine the three findings:

Finding 1: When a GPT-2-level model was used to supervise GPT-4, the resulting model typically performed somewhere between GPT-3 and GPT-3.5, and OpenAI was able to recover much of GPT-4’s capabilities despite the weaker supervision.

Finding 2: Naive human supervision, such as reinforcement learning from human feedback, could scale poorly to superhuman models without further work.

Finding 3: It’s feasible to substantially improve weak-to-strong generalization.
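Finding 1’s setup, a weak supervisor labeling data for a stronger student, can be sketched in miniature. The following Python toy is my own illustration, not OpenAI’s code: a noisy weak supervisor mislabels a few points, yet a strong student with the right inductive bias, trained only on those noisy labels, recovers the true rule.

```python
# Miniature weak-to-strong generalization (illustrative toy only).
def ground_truth(x):
    return x > 5                      # the true concept to be learned

points = list(range(11))              # inputs 0..10

def weak_label(x):
    """Weak supervisor: mostly right, but flips the labels at 2 and 7."""
    return (not ground_truth(x)) if x in (2, 7) else ground_truth(x)

def fit_threshold():
    """Strong student: pick the threshold that best fits the weak labels."""
    return max(range(12),
               key=lambda t: sum((x > t) == weak_label(x) for x in points))

t = fit_threshold()                   # student trained on noisy weak labels

weak_acc = sum(weak_label(x) == ground_truth(x) for x in points) / len(points)
student_acc = sum((x > t) == ground_truth(x) for x in points) / len(points)
```

Here the student ends up more accurate than its supervisor because its hypothesis class (a single threshold) matches the true concept, so the supervisor’s scattered errors get averaged away. The paper asks when, and how reliably, the same thing happens at GPT scale.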

After finding that Weak AI in the lab was able to help control Strong AI, OpenAI says it is committed to two actions.

To start, OpenAI has committed to releasing open-source code to make it easy to get started with weak-to-strong generalization experiments today. In other words, the computer code used in the research will be publicly available to researchers, thus putting more brain power to work. Fantastic!

Second, OpenAI will launch a $10 million grants program for graduate students, academics, and other researchers to work on superhuman AI alignment broadly. https://openai.com/blog/superalignment-fast-grants. Outstanding!

In summary, the OpenAI report said: 1) managing future superhuman AI models is likely to be a problem; 2) preliminary research suggests Weak AI could manage Strong AI models; 3) more research is needed; and 4) $10M in grants is available to anyone capable of advancing the design of Weak AI systems that control Strong AI systems.

We end with a proverb from Mali, where they say: When an old woman dies, a library burns to the ground.

Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

WHAT IS THE STATE OF AFFAIRS IN ARTIFICIAL INTELLIGENCE SYSTEMS?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correct! Here no question is too mundane. As a bonus, a proverb is also included. Today’s question comes from Richard in Los Angeles, who wants to know: what is the state of affairs in artificial intelligence systems?

Richard, you came to the right place. There has been an explosion in recent months of companies announcing large language model (LLM) AI projects. For this article I’ll refer to LLMs as AI systems. Let’s examine the situation by asking various AI systems to introduce themselves by answering this question:

   Can you please describe yourself and your capabilities in 50 words or less?

Below I cut and paste their respective responses, where available, verbatim.

ChatGPT by OpenAI. OpenAI is the California-based software company behind ChatGPT.

“I am ChatGPT, a sophisticated AI language model created by OpenAI. With extensive training on diverse topics, I excel in generating human-like text, answering queries, and providing information. My abilities encompass a wide range of subjects, making me a versatile tool for communication and knowledge sharing.”

Bard by Google. Bard is a product of Google, a California company that needs no introduction.

“I’m Bard, a brainy AI from Google. I can chat, write, translate, and more, using knowledge from the real world to fuel your words. Ask me anything! (49 words)”

Notice Bard used 27 words but claims it used 49! Apparently, Bard cannot count! Lol.
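For the skeptical, Bard’s arithmetic is easy to check in Python (the reply text is copied from above):

```python
# Count the words in Bard's reply by splitting on whitespace.
reply = ("I'm Bard, a brainy AI from Google. I can chat, write, translate, "
         "and more, using knowledge from the real world to fuel your words. "
         "Ask me anything!")
word_count = len(reply.split())   # 27 words, not 49
```

Whitespace splitting is the crudest possible word count, but it agrees with the hand count here: 27.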

Copilot by Microsoft. Microsoft calls its AI system Copilot, or Bing, which is the name of Microsoft’s search engine.

“Certainly! I am Bing, an AI language model. My capabilities include generating creative content, answering questions, providing summaries, and more. I can write poems, stories, code, and engage in interesting conversations. Feel free to ask, and I’ll do my best to assist!”

Mistral. Mistral is an AI company from Paris. Their website says the project is under construction. I tried to log on, and instead received an email indicating access was by invitation only! Ouch! How rude! Or as they say in France, Impoli! Lol! I don’t want to be a member of a club that would have me as a member! Lol!

Olympus by Amazon. According to Reuters, Amazon is investing millions in training an ambitious large language model, expecting to surpass ChatGPT and Bard. As with Mistral, Amazon is late. Perhaps Jeff needs to unretire and get back to work! Lol!

Gauss by Samsung. Samsung is tardy. When ready, Samsung says its LLM will have the capabilities to generate text, computer code, and images based on machine learning technology. It is not just one model, but a family of models, Samsung Chairman Lee explained to Business Korea on December 18, 2023.

Grok by X. Grok is in beta testing. Why? Maybe Elon is busy on his rocket ship or pumping garbage crypto meme coins. Lol! On November 2, Musk tweeted “Grok is intended to answer almost anything and suggest what questions to ask.” Musk claims Grok will possess a sense of humor, hopefully not his. Lol! What makes Grok superior according to Musk is “… Grok has real-time knowledge of the world via the X platform and will answer ‘spicy questions’ that are rejected by most other AI systems.” Knowledge on X? I guess that’s the humor! Lol!

I will close with a proverb from Togo: A man that does not lie shall never marry. Brother, you got that right for sure! Lol, lol, lol!
Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

WHAT IS THE ARTIFICIAL INTELLIGENCE ALLIANCE?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correct! Here no question is too mundane. As a bonus, a proverb is also included. Today’s question comes from Butch in Arkansas, USA. Butch wants to know: what is the Artificial Intelligence Alliance?

Butch, you came to the right place. The Artificial Intelligence Alliance (AI Alliance) is an organization with an impressive membership roster. The AI Alliance consists of technology creators, developers and adopters collaborating to advance safe, responsible AI rooted in open innovation, according to their official statement. Who are the members? Click the link here: https://thealliance.ai/

AI Alliance membership is diverse and, as expected, concentrated in the tech sector. There are numerous prestigious universities, including U.C. Berkeley, Dartmouth, and the University of Texas. Moreover, the AI Alliance includes foreign universities in Japan and one in the United Arab Emirates dedicated exclusively to AI. Holy big brain, Batman! Not surprisingly, AI Alliance members are generally from the business community. For example, you will find Meta, Intel, IBM, Oracle, and many other household corporate names.

What does the AI Alliance do? The AI Alliance has three focus areas. Let’s take them one at a time.

Responsible and Scalable AI. AI Alliance has set a goal of creating benchmarks, tools, and methodologies to ensure and evaluate high-quality and safe AI. The AI Alliance will also be an advocate for AI safety and security.

Open Foundation Models. The AI Alliance says it will focus on responsibly advancing the ecosystem of open foundation models with diverse modalities, including highly capable multilingual, multi-modal, and science models that can help address society-wide challenges in climate, education, and beyond.

Diverse AI Hardware. AI Alliance claims its members will collaborate on the benchmarking, optimization, and adaptation of AI workloads to advance a diverse set of hardware. AI Alliance reports it will focus on scalability, platform adaptation, and advanced power, energy, and carbon modeling. Benchmarks and metrics developed for model inference, fine-tuning and energy consumption of large-scale AI workloads will also be contributed to the open-source community.

Butch, you ask: in what way do the members contribute? I’ll give you three of the eight ways AI Alliance members can contribute. We start with a pledge to deploy benchmarks, tools, and other resources that enable the responsible development and use of AI systems at global scale, including the creation of a catalog of vetted safety, security, and trust tools. Second, AI Alliance members agree to responsibly advance the ecosystem of open foundation models with diverse modalities, including highly capable multilingual, multi-modal, and science models that can help address society-wide challenges in climate, human health, and beyond. And third, AI Alliance members say they will support global AI skills building, education, and exploratory research. Nothing short of ambitious!

The actual workings of the AI Alliance are fascinating and deserve a short explanation. The AI Alliance will divide the work and its members into groups with specific tasks. To direct the work, the AI Alliance announced it will establish a governing board and a technical oversight committee. The board and technical committee will establish overall project standards and guidelines. There is no mention of an Executive Director, dedicated staff, or explicit political activities, yet. However, I am confident the AI Alliance will eventually establish a Washington, D.C. presence to lobby the executive branch, Congress, and regulators. That’s how it works, folks!

Butch, I hope I answered your question. Remember what they say in Italy: When you finish the game, the king and pawn end up in the same box.

Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

WHAT IS THE PAUL ALLEN INSTITUTE FOR ARTIFICIAL INTELLIGENCE?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correct! Here no question is too mundane. As a bonus, a proverb is also included. Today’s question comes from Mark in Naperville, USA. Mark wants to know: what is the Paul Allen Institute for Artificial Intelligence?

Mark, you came to the right place. Let’s take a moment first to understand who Paul Allen was. Mr. Allen made his fortune as a Microsoft co-founder. Perhaps his name sounds familiar from the Microsoft connection? Allen passed away in 2018 at age 65. Too young! Before his passing, Allen established three research institutions, including one for AI research.

The Paul Allen AI Institute, commonly known as AI2, was established in 2014. AI2 is located in Seattle and Tel Aviv, Israel. The mission of AI2 is “…to contribute to humanity through high impact AI research and engineering.” Let’s dive into what AI2 produces (it generates plenty), beginning with Aristo.

Aristo. Aristo was AI2’s first project. The concept powering Aristo was to create a “Digital Aristotle” capable of answering users’ questions and, ultimately, holding a discussion. Aristo was designed to understand elementary science by reading relevant text. In 2019, Aristo scored over 90% on a collection of multiple-choice, eighth-grade science exams. Better than me, lol! AI2 readily admits Aristo remains incapable of understanding science in a meaningful way. Holy ancient Greek, Batman!

Semantic Scholar. Semantic Scholar accelerates scientific breakthroughs by helping scholars locate and understand key research results, make important connections, and overcome information overload. AI2 says Semantic Scholar is an AI-based search engine for scientific literature, covering more than 200 million papers across all scientific disciplines. It draws 10M users monthly. Semantic Scholar can decode PDF files to extract and understand full-text content, going beyond titles, authors, and references to include figures and concepts. Semantic Scholar played a leading role in creating the COVID-19 open research dataset.

AllenNLP. AllenNLP executes groundbreaking research in natural language processing, including improving systems’ performance and accountability, and advancing scientific methodologies for evaluating and understanding systems.

PRIOR. PRIOR is the computer vision research team. The goal is advanced computer vision leading to AI systems that can perceive the world, reason about it, and act within it. PRIOR has three research priorities: 1) embodied AI systems capable of interacting with objects, humans, and other agents in the real world; 2) vision and language multi-modal systems that can perform tasks that require an understanding of visual and language data; and 3) computer vision for the common good, e.g., a platform that can detect illegal fishing vessels via satellite imagery.

Green AI. Green AI is dedicated to reducing the cost of training and using AI systems—less expensive AI systems are accessible to more practitioners and contribute less to climate change.

Mosaic. Mosaic was formed in 2018 with the goal of building AI with common sense. Common sense is the ability to acquire and use everyday experiential knowledge. Obviously, AIs lack common sense. Good luck with this one. Lol.

AI Environment. Applied AI for climate modeling and wildlife protection.
• Earth Ranger: Software that helps ecologists and wildlife biologists make more informed wildlife conservation decisions.
• Skylight: Helps reduce illegal, unreported, and unregulated fishing through technology that provides actionable intelligence for maritime enforcement.
• Climate Modeling: The goal is to improve the world’s understanding of climate change, its effects, and what can be done now.
• Wildlands: Machine learning to support wildland fire management.

We conclude with a Norwegian proverb: Behind the clouds, the sky is always blue.

Until next time,

Yogi Nelson