Artificial Intelligence, Blockchains, France, Yogi Nelson

A GLIMPSE AT MISTRAL—THE FRENCH LEADER IN THE AI REVOLUTION

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your blockchain and artificial intelligence technology questions are answered! Here no question is too mundane. As a bonus, a proverb is also included. Today's question was submitted by Luis from Malibu, who wants to know: is Mistral leading the French AI revolution?

Luis, you came to the right place, and your question has perfect timing. I just returned from France and am still feeling the French vibe! Rather than covering the entire waterfront of AI developments in France, I'll limit myself to a French company named Mistral. Let's start with a bit of background on Mistral, including its name, which is French to the core. C'est la vie!

The word mistral has multiple uses in French. Not only does mistral name a cold northwesterly wind, but it is also common for companies in industries related to wind energy, sailing, or the environment to use it in their names. The French say it evokes a sense of power, speed, and a connection to the natural world. It can also mean masterly. While traveling in France, I even noticed Mistral used as a last name, or as they say in France, a family name. Holy multi-tasking word, Batman! Now a word about the Mistral team and company mission.

With fewer than two dozen members, the Mistral team is small. According to Mistral, their mission is to make frontier AI ubiquitous and to provide tailor-made AI to all developers. Mistral says their mission requires fierce independence, a strong commitment to open, portable, and custom solutions, and an extreme focus on shipping the most advanced technology in limited time. How I discovered Mistral comes next, followed by what Mistral does and what it offers.

Approximately three months ago, I discovered Mistral while researching AI projects for this blog. I attempted access but was denied. Instead, Mistral put me on a waiting list of interested users and recently granted me access, with full disclosure that their large language model (LLM) is still in beta testing.

Today, I put Mistral's LLM, Le Chat, to the test against two American competitors, OpenAI's ChatGPT and Microsoft's Copilot, by asking all three identical questions. All three offer free access to their LLM services. I won't detail the contest; it's not necessary. Essentially, Mistral is lagging behind the competition. Why Mistral trails is a matter of speculation. Perhaps it's funding related? Maybe the Mistral team is too small to compete with the titans? Could it be that they started later? I don't know, but I do know the competition is ahead. Okay, now we go beyond their LLM to a preview of Mistral's AI products.

On their website, Mistral has a click option labeled La Plateforme (the Platform). This is where you'll find the heart of their offerings and services. Basically, La Plateforme is a subscription service. For a fee, Mistral offers access to their AI developer tools, which Mistral claims permit users to develop AI agents and other related products. The subscription grants access to the latest Mistral models, with pay-as-you-go pricing based on what you use. That's a good feature. Moreover, users can set monthly spending limits, and if there are multiple users within an enterprise, Mistral will centralize billing. The subscription also grants access to the corresponding documentation, and of course users can create API keys to access Mistral AI.
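For the curious, here is a minimal sketch of what using one of those API keys might look like. The endpoint path, model name ("mistral-tiny"), and payload shape are my assumptions, modeled on common chat-completion APIs; check La Plateforme's documentation for the real values. The sketch only assembles the request so you can see its shape, it does not send it.

```python
import json
import os

# Hypothetical endpoint; verify against Mistral's official documentation.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt, model="mistral-tiny"):
    """Assemble the headers and JSON body for a chat-completion call."""
    # The API key would come from your La Plateforme account.
    api_key = os.environ.get("MISTRAL_API_KEY", "<your-api-key>")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_chat_request("Bonjour, Le Chat!")
print(body)
```

POSTing `body` with those headers to `API_URL` (for example via `urllib.request`) would return the model's reply as JSON, with usage counted against your subscription.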

Time to say "au revoir" (goodbye), but not before sharing this French proverb: "Only imbeciles don't change their opinions." Well said, my French friends!

Sincerely,

Yogi Nelson

Artificial Intelligence, Blockchains, Patents

How are DARPA, Explainable Artificial Intelligence and Nvidia Connected? 

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered! Here no question is too mundane. As a bonus, a proverb is also included. Today's question comes from Susan in Irvine, who asks: how are DARPA, Explainable Artificial Intelligence (XAI), and Nvidia all connected?

Susan, you came to the right place. As a wealth manager, you have a fiduciary duty to stay current on investable technology on behalf of your clients. Around 1990 a new technology was on the horizon, the internet, and it created trillions in new wealth! Artificial intelligence (AI) is on a similar trajectory. However, AI is not well understood yet. Let's take a moment to acquaint ourselves with AI from the perspective of DARPA. In hindsight, DARPA's XAI program could have been used to foresee a tremendous investment opportunity: Nvidia, the advanced computer chip maker essential to the AI business.

DARPA is an acronym for the Defense Advanced Research Projects Agency. DARPA is part of the Department of Defense (DoD), and its mission is the development of emerging technologies for military use. DARPA's predecessor agency created ARPANET, the forerunner of the internet, in the late 1960s to facilitate command and control of military communications. DARPA's goal is to ensure the US military maintains technological superiority over all other nations.

In 2015 DARPA released an article titled Explainable Artificial Intelligence (XAI): https://www.researchgate.net/publication/356781652_DARPA's_explainable_AI_XAI_program_A_retrospective. Remember, the XAI project was undertaken in 2015, years before the public had access to ChatGPT and other similar products. Let's examine the research findings.

According to DARPA in 2015, "…dramatic success in machine learning will lead to numerous AI applications. It appears AI will eventually produce autonomous systems that will perceive, learn, decide, and act on their own." However, DARPA was concerned that the effectiveness of AI systems would be stymied unless and until machines could credibly explain their decisions and actions to human users. Holy transparency, Batman! Therefore, the XAI program was intended to create a suite of machine learning techniques that would:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

The XAI program focused on the development of multiple systems by addressing challenges in two areas:

  • Machine learning problems to classify events based on multimedia data; and
  • Machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions.

DARPA selected these two areas because both represented major operational challenges. The first relates to classification and reinforcement learning, and the second is centered on intelligence analysis and autonomous systems. In other words, DARPA was working on building AI machines that understand the context and environment in which they operate, allowing them, over time, to characterize real-world phenomena precisely and in real time. Basically, DARPA was developing AI-Gen 2 before the public had access to AI-Gen 1! Holy Explainable AI, Batman!
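To make the "explainable model" idea concrete, here is a toy Python sketch. The features, thresholds, and labels are invented for illustration; real XAI systems learn their models rather than hard-coding rules. The point is the output shape DARPA wanted: a classification plus a human-readable rationale.

```python
# Toy "explainable classifier": each prediction comes with a rationale.
# The rules and features below are made up purely for illustration.
RULES = [
    # (feature, threshold, label, explanation template)
    ("wingspan_m", 2.0, "glider", "wingspan {v} m exceeds 2.0 m"),
    ("speed_kts", 100, "aircraft", "speed {v} kts exceeds 100 kts"),
]

def classify_with_explanation(observation):
    """Return (label, explanation) instead of a bare label."""
    for feature, threshold, label, template in RULES:
        value = observation.get(feature, 0)
        if value > threshold:
            reason = f"Classified as {label} because " + template.format(v=value)
            return label, reason
    return "unknown", "No rule fired; insufficient evidence to classify."

label, why = classify_with_explanation({"wingspan_m": 3.5, "speed_kts": 40})
print(label, "->", why)
```

A bare label ("glider") tells an analyst nothing; the attached rationale is what lets a human decide whether to trust the machine, which is exactly the gap the XAI program targeted.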

DARPA released seven public findings from the XAI program and kept an unknown number of classified findings! Holy keep it on the down-low, Batman! LOL!

  1. Users prefer systems that include explanations rather than just answers (no surprise)
  2. To improve end results the task must be sufficiently difficult that the XAI explanation helps (makes sense)
  3. User cognitive load to interpret explanations can hinder user performance (in other words there are times when too much explanation is a negative)
  4. XAI is more helpful in edge cases
  5. Explanation effectiveness can change over time
  6. XAI combined with an advisor helps (no kidding, LOL)
  7. XAI is effective for aligning mental models

What are the takeaways? Obviously, DARPA did not leak military secrets on the internet and only disclosed the minimum. However, DARPA did leave an interesting information trail in 2015 that, had we followed it, could have been a monster opportunity. The XAI 2015 article was a glimpse into the future, and from it we might have forecast the rise of Nvidia. Holy unicorn, Batman! Nvidia stock has since soared, and we could have, should have…

I end with a Russian proverb: trust but verify!

Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

IS IT POSSIBLE TO EARN $10M PROVING WEAK AI CAN SUPERVISE STRONG AI?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correctly! Here no question is too mundane. As a bonus, a proverb is included. Today's question is from Victor in Altadena, CA. He wants to know if he can earn $10M by developing a Weak artificial intelligence (AI) system capable of supervising a Strong AI.

Victor, you came to the right place. On December 14th, OpenAI released a research paper titled Weak-to-Strong Generalization, in which they explored the very question you asked!

https://openai.com/research/weak-to-strong-generalization

OpenAI starts with a premise: Strong AI systems will be vastly more intelligent than humans within 10 years. Holy left behind, Batman! OpenAI says our current method of controlling AI relies on human supervision. However, future AI systems (Strong AI) will be capable of extremely complex and creative behaviors, making it difficult for humans to reliably supervise them. Sounds like a sci-fi horror movie! Based on that premise, OpenAI offers the following hypothesis: humans can control weak AI systems, and weak AI systems can be designed, built, and used to control much stronger AI systems. Therefore, controlling Strong AI systems in the future should be possible. OpenAI says their hypothesis has been proven correct in a lab setting. Let's unpack their research and findings.

I'll quote from the report: "…there remain important disanalogies between our current empirical setup and the ultimate problem of aligning superhuman models." In other words, more work is needed.

OpenAI says, "it may be easier for future models to imitate weak human errors than for current strong models to imitate current weak model errors, which could make generalization [control] harder in the future. Nevertheless, we believe our setup captures some key difficulties of aligning future superhuman models, enabling us to start making empirical progress on this problem today." Said succinctly: we know managing superhuman models is likely to become a major issue, but we are working on it! Are you reassured?

Let’s examine the three findings:

Finding 1: When GPT-2 was used to supervise GPT-4, the resulting model typically performed somewhere between GPT-3 and GPT-3.5, meaning OpenAI was able to recover much of GPT-4's capabilities using only weak supervision.

Finding 2: Naive human supervision, such as reinforcement learning from human feedback, could scale poorly to superhuman models without further work.

Finding 3: It’s feasible to substantially improve weak-to-strong generalization.
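The weak-to-strong idea can be illustrated with a toy sketch. This is entirely my own construction, not OpenAI's actual experimental setup: a "weak supervisor" labels data noisily, yet a "strong student" with the right inductive bias, trained only on those noisy labels, ends up more accurate than its supervisor.

```python
import random

random.seed(0)

# Ground truth the weak supervisor is trying (imperfectly) to teach.
def ground_truth(x):
    return 1 if x >= 50 else 0

xs = list(range(100))
# Weak supervisor: gives the correct label, but flips it 30% of the time.
weak_labels = [ground_truth(x) if random.random() > 0.3 else 1 - ground_truth(x)
               for x in xs]

def train_threshold_student(xs, labels):
    """The 'strong student': pick the threshold that best fits the noisy labels."""
    best_t, best_hits = 0, -1
    for t in range(101):
        hits = sum((1 if x >= t else 0) == y for x, y in zip(xs, labels))
        if hits > best_hits:
            best_t, best_hits = t, hits
    return best_t

t = train_threshold_student(xs, weak_labels)
weak_acc = sum(y == ground_truth(x) for x, y in zip(xs, weak_labels)) / len(xs)
student_acc = sum((1 if x >= t else 0) == ground_truth(x) for x in xs) / len(xs)
print(f"weak supervisor accuracy: {weak_acc:.2f}, student accuracy: {student_acc:.2f}")
```

Because the noise is roughly symmetric, the threshold that best fits the noisy labels still lands near the true boundary, so the student outperforms its teacher. That, in miniature, is the generalization effect Finding 1 describes.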

After discovering that Weak AI in the lab was able to control Strong AI, OpenAI says it is committed to undertaking two actions.

First, OpenAI has committed to releasing open-source code to make it easy to get started with weak-to-strong generalization experiments today. In other words, the computer code used in the research will be publicly available to researchers, thus putting more brain power to work. Fantastic!

Second, OpenAI will launch a $10 million grants program for graduate students, academics, and other researchers to work broadly on superhuman AI alignment: https://openai.com/blog/superalignment-fast-grants. Outstanding!

In summary, the OpenAI report said: 1) managing future superhuman AI models is likely to be a problem; 2) preliminary research suggests Weak AI could manage Strong AI models; 3) more research is needed; and 4) $10M in grants is available to anyone capable of designing a Weak AI system that can control a Strong AI system.

We end with a proverb from Mali, where they say: When an old woman dies, a library burns to the ground.

Until next time,

Yogi Nelson

Artificial Intelligence, Yogi Nelson

WHAT IS THE STATE OF AFFAIRS IN ARTIFICIAL INTELLIGENCE SYSTEMS?

Namaste Yogis. Welcome to the Blockchain & AI Forum, where your technology questions are answered, mostly correctly! Here no question is too mundane. As a bonus, a proverb is also included. Today's question comes from Richard in Los Angeles, who wants to know: what is the state of affairs in artificial intelligence systems?

Richard, you came to the right place. There has been an explosion in recent months of companies announcing large language model (LLM) AI projects. For this article I'll refer to LLMs as AI systems. Let's examine the situation by asking various AI systems to introduce themselves by answering this question:

   Can you please describe yourself and your capabilities in 50 words or less?

Below I cut and paste their respective responses verbatim, where available.

ChatGPT by OpenAI. OpenAI is the California-based software company behind ChatGPT.

“I am ChatGPT, a sophisticated AI language model created by OpenAI. With extensive training on diverse topics, I excel in generating human-like text, answering queries, and providing information. My abilities encompass a wide range of subjects, making me a versatile tool for communication and knowledge sharing.”

Bard by Google. Bard is a product of Google, a California company that needs no introduction.

“I’m Bard, a brainy AI from Google. I can chat, write, translate, and more, using knowledge from the real world to fuel your words. Ask me anything! (49 words)”

Notice Bard used 27 words but claims it used 49! Apparently, Bard cannot count! Lol.
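To be fair to Bard, we can double-check the count with a couple of lines of Python; splitting the reply on whitespace confirms 27 words.

```python
# Count the words in Bard's reply by splitting on whitespace.
reply = ("I'm Bard, a brainy AI from Google. I can chat, write, translate, "
         "and more, using knowledge from the real world to fuel your words. "
         "Ask me anything!")
print(len(reply.split()))  # -> 27, not the 49 Bard claimed
```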

Copilot by Microsoft. Microsoft calls its AI system Copilot or Bing, which is the name of Microsoft's search engine.

“Certainly! I am Bing, an AI language model. My capabilities include generating creative content, answering questions, providing summaries, and more. I can write poems, stories, code, and engage in interesting conversations. Feel free to ask, and I’ll do my best to assist!”

Mistral: Mistral is an AI company from Paris. Their website says the project is under construction. I tried to log on, and instead received an email indicating access was by invitation only! Ouch! How rude! Or as they say in France, Impoli! Lol! I don’t want to be a member of a club that would have me as a member! Lol!

Olympus by Amazon. According to Reuters, Amazon is investing millions in training an ambitious large language model, expecting to surpass ChatGPT and Bard. As with Mistral, Amazon is late. Perhaps Jeff needs to unretire and get back to work! Lol!

Gauss by Samsung. Samsung is tardy. When ready, Samsung says its LLM will have the capabilities to generate text, computer code, and images based on machine learning technology. It is not just one model, but a family of models, Samsung Chairman Lee explained to Business Korea on December 18, 2023.

Grok by X. Grok is in beta testing. Why? Maybe Elon is busy on his rocket ship or pumping garbage crypto meme coins. Lol! On November 2, Musk tweeted “Grok is intended to answer almost anything and suggest what questions to ask.” Musk claims Grok will possess a sense of humor, hopefully not his. Lol! What makes Grok superior according to Musk is “… Grok has real-time knowledge of the world via the X platform and will answer ‘spicy questions’ that are rejected by most other AI systems.” Knowledge on X? I guess that’s the humor! Lol!

I will close with a proverb from Togo: A man that does not lie shall never marry. Brother, you got that right for sure! Lol, lol, lol!
Until next time,

Yogi Nelson