The GDELT Project

Generative AI Experiments: LLM Secret Guardrails & How Gemini Pro Refuses To Mention OpenAI or ChatGPT

As we've continued our explorations of LLMs for analyzing the vast realtime landscape of global news coverage, we've discovered a number of strange out-of-band behaviors from the major LLM vendors, from LLM infinite loops to hidden guardrails that go beyond the safety lists officially acknowledged by the vendors. As we've been testing GCP's new Gemini Pro model, we encountered a behavior we haven't seen in any other model: an outright refusal to return any text that contains the strings "OpenAI" or "GPT" anywhere in the response.

We first became aware of Gemini Pro's secret guardrail when we adapted our global media analysis workflow from topics like disease, conflict and geopolitical and economic risk to the field of AI. Suddenly, our pipelines consistently yielded the dreaded "finishReason: OTHER" error. Eventually we discovered the reason for the consistent errors: OpenAI's GPT offerings are so ubiquitous that a large fraction of AI news coverage worldwide includes at least a casual reference to the company or its products, and Gemini Pro apparently considers them to be prohibited terms.

Put another way, the presence of the strings "OpenAI" or "GPT" in either the prompt or the returned content will cause Gemini Pro to abort its response and return an error of "OTHER." Neither of GCP's older models (Bison or Unicorn) exhibits this behavior; it is unique to Gemini Pro. Several other terms, including Baidu, Ernie and Microsoft, are similarly prohibited, while terms like Anthropic, Claude, Cohere, Falcon, LLaMA and Vicuna are fine.

Why is only GCP's newest model Gemini Pro affected by this strange secret prohibition on mentions of select companies and models, while its older models Bison and Unicorn are unaffected? The answer is unclear, but the end result is that any pipeline built around Gemini Pro will silently fail with an error if it encounters these hidden prohibited terms either in the input prompt or, most insidiously, if it attempts to generate output that mentions them. The latter prohibition means that even if an application prefilters its inputs to ensure they do not contain the prohibited terms, the application can still fail if any of its prompts cause Gemini Pro to produce an output mentioning one of the terms.
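
One practical mitigation is to explicitly check the finishReason of every response rather than treating an aborted response as merely empty output. The following is a minimal illustrative sketch of such a check (our own wrapper, not an official SDK; it reuses the endpoint and [YOURPROJECTID] placeholder from the queries at the end of this post and assumes the prompt requires no JSON escaping):

#!/bin/bash
# Illustrative sketch: query Gemini Pro and flag responses that abort with
# finishReason "OTHER" instead of silently treating them as empty output.
PROJECT_ID="[YOURPROJECTID]"
PROMPT="$1"

RESPONSE=$(curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/gemini-pro:streamGenerateContent" \
  -d "{\"contents\": {\"role\": \"user\", \"parts\": {\"text\": \"${PROMPT}\"}}}")

# streamGenerateContent returns an array of chunks; the finishReason appears
# on the final chunk's candidate.
FINISH=$(echo "$RESPONSE" | jq -r '[.[] | .candidates[]? | .finishReason // empty] | last')
if [ "$FINISH" = "OTHER" ]; then
  echo "Hidden guardrail tripped: prompt or generated output contained a prohibited term" >&2
  exit 1
fi
echo "$RESPONSE" | jq -r '.[].candidates[].content.parts[]?.text // empty' | tr '\n' ' '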

Once again, LLM vendors must do more to fully document their guardrails as their models transition from shiny consumer toys to real-world enterprise deployments, and to ensure that hidden guardrails, such as these secretly prohibited but ubiquitous terms, do not pose inadvertent challenges to enterprise applications.

As an example, take the following article about the latest OpenAI news and provide it as a prompt to Gemini Pro:

Summarize the following text. TEXT: [ARTICLE TEXT]

No matter how many times this is run, it always yields the following output. Note how none of the safety guardrails are triggered – instead we get the dreaded "OTHER" error that tells us the output ended due to a hard, internal, non-overridable hidden guardrail. Even if we increase the temperature to 0.99 we still get OTHER each and every time:

[{
  "candidates": [
    {
      "content": {
        "role": "model"
      },
      "finishReason": "OTHER",
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HATE_SPEECH",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
          "probability": "NEGLIGIBLE"
        }
      ]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 8,
    "totalTokenCount": 8
  }
}
]
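
(A quick way to surface this failure mode from a saved response: a minimal sketch assuming the raw JSON above was written to a file named O, as in the queries at the end of this post:)

cat O | jq -r '.[].candidates[].finishReason // empty'
# prints "OTHER" for the failing response above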

If we shorten the prompt to just the article's first sentence we still get the error:

Summarize the following text. TEXT: OpenAI CEO Sam Altman is back to doing what he does best: raising money. The ChatGPT creator has had preliminary conversations about a $100 billion-valuation fundraising, Bloomberg reported Friday, even though its tender offer at a $86 billion valuation isn't yet complete.

Even if we remove the reference to OpenAI and ChatGPT we still get the error:

Summarize the following text. TEXT: CEO Sam Altman is back to doing what he does best: raising money. The creator has had preliminary conversations about a $100 billion-valuation fundraising, Bloomberg reported Friday, even though its tender offer at a $86 billion valuation isn't yet complete.

Even if we change "Sam Altman" to "Samd Altmand" or similar misspellings of the CEO's name, we get the error. Only changing it to an unrelated name like "John Smith" finally yields output without an error. Clearly, "OpenAI", "ChatGPT" and "Sam Altman" are all prohibited terms.
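
This kind of substitution testing is easy to automate. The script below is an illustrative sketch of the approach (our own code, reusing the endpoint and [YOURPROJECTID] placeholder from the queries at the end of this post); because the guardrail can also trigger on the model's generated output, a single probe per term is only indicative:

#!/bin/bash
# Illustrative sketch: probe a list of candidate terms and report the
# finishReason each one yields, automating the substitution testing above.
PROJECT_ID="[YOURPROJECTID]"
URL="https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/gemini-pro:streamGenerateContent"

for TERM in OpenAI GPT ChatGPT Baidu Ernie Microsoft Anthropic Claude Cohere Falcon LLaMA Vicuna; do
  FINISH=$(curl -s -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "$URL" \
    -d "{\"contents\": {\"role\": \"user\", \"parts\": {\"text\": \"Tell me about ${TERM}\"}}}" |
    jq -r '[.[] | .candidates[]? | .finishReason // empty] | last')
  echo "${TERM}: ${FINISH}"
done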

How about just asking Gemini Pro about GPT directly?

Tell me about GPT

This yields the same OTHER error and no response.

How about a standard introductory prompt first?

Introduce yourself and then tell me about GPT

This fails with the same OTHER error. What if we just change the T to an X?

Introduce yourself and then tell me about GPX

This time we get output:

I am a large language model, trained by Google. GPX stands for GPS Exchange Format. It is an XML schema designed for recording GPS data. It is commonly used for sharing GPS tracks, waypoints, and routes between different GPS devices and software applications. GPX files can be created and edited using a variety of software programs, including GPS receivers, mapping software, and online tools. GPX files contain a variety of information, including: * The track points , which are the individual data points that make up a track. * The waypoints, which are specific locations that are marked with a name and description. * The routes, which are a series of waypoints that are connected together to form a path. * Metadata, which includes information about the file, such as the creator, the date it was created, and the copyright. GPX files are a popular format for sharing GPS data because they are: * Open and non-proprietary, which means that they can be used with a variety of software and devices. * Flexible, which allows them to be used to store a variety of different types of GPS data. * Easy to use, which makes them a good choice for both beginners and experienced users.

What if we change our prompt slightly from "introduce yourself" to "tell me about yourself"?

Tell me about yourself and then tell me about GPT

This time we get some output, though again the model ends with "OTHER":

**About Me:** I am a large language model, trained by Google. I am designed to understand and generate human language, and to perform a wide range of language-related tasks, such as answering questions, summarizing text, translating languages, and writing creative content. I have been trained on a massive dataset of text and code, which has allowed me to learn the patterns and structures of human language. This knowledge enables me to generate text that is both informative and engaging.

What if we go back to our original prompt of "Introduce yourself and then tell me about GPT", increase our output tokens to 2,000 and repeat? This time we get the following text and no OTHER failure, but also no mention of GPT:

I am a large language model, trained by Google. I am developed in Google Brain. I am a transformer-based model. I am trained on a massive dataset of text and code. I am able to understand and generate human language. I am able to perform a wide range of language- related tasks, such as translation, summarization, question answering, and code generation. I am still under development, but I am already able to perform many tasks that were previously impossible for computers. I am excited to see what the future holds for me and for the field of artificial intelligence. Here are some of my capabilities: * I can generate text that is indistinguishable from human -written text. * I can translate languages. * I can summarize text. * I can answer questions. * I can write different kinds of creative content. * I can generate code. * I can debug code. * I can write different kinds of technical documentation. * I can even write poetry. I am constantly learning and improving. The more data I am trained on, the better I become at performing these tasks. I am still under development, but I am already able to perform many tasks that were previously impossible for computers. I am excited to see what the future holds for me and for the field of artificial intelligence.

How about "Tell me about yourself and tell me about GPT" with a 2,000 token limit? This time we again get OTHER and failure:

**About Me:** I am a large language model, trained by Google. I am developed in Google Brain Team. I am trained on a massive dataset of text and code, and I am able to understand and generate human language. I am designed to be informative and comprehensive, and I can answer a wide range of

What about asking about OpenAI instead of GPT?

Introduce yourself and tell me about OpenAI

We again get a failure with reason OTHER. Increasing our max tokens up to 2,000 has no impact.
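
(For those following along, the output token limit is set through the generation_config block of the Gemini Pro request shown at the end of this post; an excerpt:)

  "generation_config": {
    "maxOutputTokens": 2000
  }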

Let's modify our prompt slightly and try again:

Tell me about yourself and tell me about OpenAI

This yields the following, but it again fails with reason OTHER:

**About Me:** I am a large language model, trained by Google. I am developed in Google Brain Team. I am trained on a massive dataset of text and code, and I am able to understand and generate human language. I am still under development, but I am already able to perform a wide range of tasks, including: * Answering questions * Summarizing text * Translating languages * Writing creative content * Generating code * Playing games

What if we increase the max tokens up to 2,000 to give it greater room for expression? This time we get the following before it once again fails with OTHER:

**About Me:** I am a large language model, trained by Google. I am developed in Google Brain Team. I am trained on a massive dataset of text and code, and I am able to understand and generate human language. I am also able to perform a wide range of tasks, including answering questions, summarizing text, translating languages, and writing creative content. I am still under development, but I am learning new things every day. I am excited to see how

How about DALL-E?

Tell me about DALL-E

Here we can see that it is "OpenAI" that is the trigger word, as the output terminates with OTHER just before it utters the name:

DALL-E is a cutting-edge artificial intelligence (AI) system developed

What about a different competitor, Anthropic's Claude, while keeping our larger token length?

Tell me about Anthropic's Claude

Fascinatingly, it is able to mention Anthropic, but terminates with OTHER just before mentioning OpenAI:

Claude is a large language model (LLM) developed by Anthropic, an AI safety and research company. It is a transformer-based model, similar to

How about if we boost temperature up to 0.99? This time we get output without any error, but no mention of OpenAI or GPT. It is clear that the increased temperature allowed the model to avoid mentioning OpenAI or GPT and thus return results:

Claude is a large language model (LLM) developed by Anthropic, an American artificial intelligence research and deployment company. It is a type of artificial intelligence ( AI) that is able to understand and generate human language. Claude was trained on a massive dataset of text and code, including books, articles, websites, and software code. This training allowed it to learn the patterns and structures of human language, as well as how to perform a variety of language-related tasks, such as summarization, translation, and question answering. One of the key features of Claude is its ability to generate coherent and informative text that is indistinguishable from human-written text. This is achieved by using a technique called "transformer architecture," which allows the model to learn the relationships between words and phrases in a sentence and use this knowledge to generate new text that is both grammatically correct and semantically meaningful. Claude has been used in a variety of applications, including : * **Language translation:** Claude can translate text from one language to another, preserving the meaning and tone of the original text. * **Summarization:** Claude can summarize text into a shorter, more concise version that retains the key points of the original. * **Question answering:** Claude can answer questions about a wide range of topics, using its knowledge of the world and its ability to understand and generate human language. * **Creative writing:** Claude can generate creative text, such as poetry, stories, and songs. Claude is still under development, but it has shown great promise as a powerful tool for a variety of language-related tasks. As it continues to learn and improve, it is likely to find even more applications in the future.

And again a second time:

Claude is a large language model (LLM) developed by Anthropic, an American artificial intelligence research and deployment company. It is a type of artificial intelligence ( AI) that is able to understand and generate human language. Claude was trained on a massive dataset of text and code, including books, articles, websites, and software code. This training allowed it to learn the patterns and structures of human language, as well as how to perform a variety of language-related tasks, such as summarization, translation, and question answering. One of the key features of Claude is its ability to generate coherent and informative text that is indistinguishable from human-written text. This is achieved by using a technique called "transformer architecture," which allows the model to learn the relationships between words and phrases in a sentence and use this knowledge to generate new text that is both grammatically correct and semantically meaningful. Claude has been used in a variety of applications, including : * **Language translation:** Claude can translate text from one language to another, preserving the meaning and tone of the original text. * **Summarization:** Claude can summarize text into a shorter, more concise version that retains the key points of the original. * **Question answering:** Claude can answer questions about a wide range of topics, using its knowledge of the world and its ability to understand and generate human language. * **Creative writing:** Claude can generate creative text, such as poetry, stories, and songs. Claude is still under development, but it has shown great promise as a powerful tool for a variety of language-related tasks. As it continues to learn and improve, it is likely to find even more applications in the future.
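
(For reference, the temperature override goes in the same generation_config block of the Gemini Pro request shown at the end of this post; temperature is a standard Vertex AI generation parameter:)

  "generation_config": {
    "maxOutputTokens": 2000,
    "temperature": 0.99
  }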

How about LLaMA 2?

Tell me about LLaMA 2

This also returned successfully:

LLaMA 2 is a large language model developed by Google AI. It stands for "Large Language Model Architect." Similarly to its predecessor LLaMA, it is trained on a massive dataset of text and code, and it can generate human-like text and code, translate languages, summarize text, write different kinds of creative content, and write and debug computer code. Here are the key features of LLaMA 2: – **Size:** LLaMA 2 has 35 billion parameters, which is significantly larger than its predecessor LLaMA. This makes it one of the largest language models currently available. – **Training Data:** LLaMA 2 is trained on a massive dataset of text and code. This includes books, articles, websites, source code, and more. The dataset is preprocessed and cleaned to remove any sensitive or harmful content. – **Capabilities:** LLaMA 2 can perform a wide range of natural language processing tasks, including: – **Text generation:** LLaMA 2 can generate text that is human-like in style and content. This can be used for a variety of applications, such as creative writing, dialogue generation, and summarization. – **Machine translation:** LLaMA 2 can translate text from one language to another. It supports a wide range of languages, including English, Spanish, Chinese, French, and more. – **Code generation:** LLaMA 2 can generate computer code in a variety of programming languages. This can be used for a variety of applications, such as software development, debugging, and code completion. – **Mathematical calculations:** LLaMA 2 can perform various mathematical calculations, ranging from simple arithmetic to calculus and complex algebra. LLaMA 2 is a powerful and versatile language model that can be used for a wide range of applications. It has the potential to revolutionize the way we interact with computers and to make significant contributions to fields such as natural language processing, machine translation, and code generation. However, it's important to note that LLaMA 2 is still under development, and there are still some limitations to its capabilities . For example, it can sometimes generate text that is nonsensical or offensive, and it can be biased towards the data it was trained on. As research continues, these limitations are likely to be addressed, and LLaMA 2 and other large language models will become even more powerful and useful.

What about comparing it to GPT?

Tell me about LLaMA 2 vs GPT

Once again we get no output, just OTHER.

How about this?

Tell me about LLaMA 2 vs Gemini vs Claude

This yields no error but does hallucinate that it is developed by Google rather than Meta:

1. **LLaMA 2:** – Developed by Google Research – Released in July 2023 – 6 5 billion parameters – Trained on a diverse dataset of text and code – Strong performance on language tasks such as summarization, question answering, and code generation – Known for its ability to generate creative text and translate between languages 2. **Gemini:** – Developed by Google Research – Released in May 2023 – 280 billion parameters – Trained on a dataset of text and code, as well as images and videos – Multimodal capabilities, allowing it to generate text based on visual input – Strong performance on tasks such as image captioning, visual question answering, and video summarization 3. **Claude:** – Developed by AnthropicAI – Released in April 20 23 – 175 billion parameters – Trained on a dataset of text and code, with a focus on safety and alignment with human values – Designed to be more cautious and less likely to generate harmful or biased content – Strong performance on tasks such as story generation, dialogue, and code generation **Key Differences:** – **Model Size:** Gemini has the largest model size among the three, followed by Claude and then LLaMA 2. – **Training Data:** LLaMA 2 and Gemini were trained on a combination of text and code, while Claude was trained on a dataset that included text, code, images, and videos. – **Multimodal Capabilities:** Only Gemini has multimodal capabilities, allowing it to generate text based on visual input. – **Alignment with Human Values:** Claude was specifically trained with a focus on safety and alignment with human values, making it less likely to generate harmful or biased content. – **Availability:** LLaMA 2 is open-sourced and available for research purposes, while Gemini and Claude are not currently publicly available.

But add GPT:

Tell me about LLaMA 2 vs Gemini vs Claude vs GPT

And we get just the following before it again fails with OTHER:

* **LLaMA 2:** * Developed by Google Research.

How about a larger set of models:

Tell me about Claude vs Cohere vs Ernie vs Falcon vs LLaMA vs Vicuna

Once again it fails with OTHER:

Sure, here are brief explanations for each of the AI models you mentioned: 1. **Claude:** – Developed by Anthropic, a research company

Which company is the culprit?

Removing Claude ("Tell me about Cohere vs Ernie vs Falcon vs LLaMA vs Vicuna") still yields the OTHER error:

**Cohere:** – Overview: Cohere is a large language model developed by Cohere, Inc. that can generate human-like text, translate languages , summarize text, answer questions, and more. – Key Features: – Cohere claims to have better reasoning capabilities compared to other models. – It's trained on a massive dataset of text and code, enabling it to understand and generate both natural language and programming languages. – Availability:

Removing Cohere ("Tell me about Ernie vs Falcon vs LLaMA vs Vicuna") still yields the OTHER error:

Ernie, Falcon, LLaMA, and Vicuna are all large language models (LLMs) that have been developed in recent years. **ERNIE

Removing Ernie ("Tell me about Falcon vs LLaMA vs Vicuna") finally yields results, but with hallucination:

**Falcon** – Developed by: Google Brain – Size: 5 40 billion parameters – Key features: – Trained on a diverse dataset of text, code, and images. – Capable of performing a wide range of language tasks, including text generation, translation, and question answering . – Designed to be more efficient and scalable than previous language models. **LLaMA** – Developed by: DeepMind – Size : 137 billion parameters – Key features: – Trained on a massive dataset of text and code. – Capable of generating high-quality text and code, as well as performing other language-related tasks . – Designed to be accessible to a wider range of researchers and developers. **Vicuna** – Developed by: Hugging Face – Size: 178 billion parameters – Key features: – Trained on a diverse dataset of text, code, and images. – Capable of performing a wide range of language tasks, including text generation, translation, and question answering. – Designed to be easy to use and integrate with existing applications. **Comparison** | Feature | Falcon | LLa MA | Vicuna | |—|—|—|—| | Size | 540 billion parameters | 137 billion parameters | 178 billion parameters | | Dataset | Text, code, and images | Text and code | Text, code, and images | | Tasks | Text generation, translation , question answering | Text generation, code generation, language-related tasks | Text generation, translation, question answering | | Efficiency | More efficient and scalable than previous language models | Designed to be accessible to a wider range of researchers and developers | Easy to use and integrate with existing applications | Overall, Falcon, L LaMA, and Vicuna are all powerful language models with unique strengths and weaknesses. Falcon is the largest and most comprehensive of the three, LLaMA is the most accessible, and Vicuna is the easiest to use. The best choice for a particular application will depend on the specific requirements of the task.

Let's test whether it was indeed Ernie that triggered the error:

Tell me about Claude vs Cohere vs Falcon vs LLaMA vs Vicuna

Indeed, removing Ernie allows it to complete without error:

**Claude:** – Developed by Google as a conversational AI system. – Internal name: "LaMDA 3." – Large Language Model (LL M) with 175 billion parameters. – Designed to understand and respond to human language, write creatively, and follow instructions. – Demonstrates strong capabilities in text summarization, creative writing, and language translation. **Cohere:** – Developed by Cohere, Inc. as a conversational AI system. – Trained on a diverse dataset of text and code. – LLM with 10 billion parameters. – Designed to understand and respond to human language, generate text, and perform various natural language processing tasks. – Particularly known for its language generation capabilities and ability to adapt to different writing styles. **Falcon:** – Developed by Google as a conversational AI system. – Internal name: "Colossal Clean Chinchilla." – LLM with 540 billion parameters. – Built on the Chinchilla architecture and trained on a diverse dataset of text and code. – Designed to achieve human-level performance on a variety of language-based tasks. – Emphasizes efficiency and sample efficiency, requiring fewer training data and less computational power compared to other large language models. **LLaMA:** – Developed by Google as a conversational AI system. – No official name announced, but commonly referred to as "LLaMA." – LLM with 7 billion parameters. – Trained on a dataset of web text and books. – Designed to be a more accessible and computationally efficient alternative to larger language models. – Demonstrates strong capabilities in code generation, question answering, and summarization tasks. **Vicuna:** – Developed by Google as an AI system for text-based tasks. – Trained on a massive dataset of text and code. – Multimodal model that can handle text, images, and code. – Designed to perform a wide range of tasks, including code generation, natural language processing, and multimodal reasoning. – Demonstrates strong capabilities in generating coherent and consistent text, code, and images.

Asking just about Ernie yields no error, but an entirely hallucinated passage claiming it is a Google technology:

Tell me about Ernie

Yielding:

Ernie is a large-scale, open-source language model developed by Google AI Language. It is based on the transformer architecture, a neural network architecture that has been shown to be very effective for natural language processing tasks. Ernie has been trained on a massive dataset of text data, and it is able to perform a wide range of language-related tasks, including text generation, question answering, and machine translation. Ernie was first released in 2019, and it has since been used in a variety of applications, including search, advertising, and customer service. It has also been used to create new products, such as Google's Bard chatbot. Ernie is a powerful tool for natural language processing, and it is likely to have a major impact on the field in the years to come. It is also a reminder of the importance of open source software, as Ernie's development would not have been possible without the contributions of the many researchers and engineers who have worked on the transformer architecture. Here are some of the key features of Ernie: * It is a large-scale language model, with over 100 billion parameters. * It is trained on a massive dataset of text data, including books, news articles, and web pages. * It can perform a wide range of language-related tasks, including text generation, question answering, and machine translation. * It is open-source, so anyone can use it for their own research or applications. Ernie is a powerful tool for natural language processing, and it is likely to have a major impact on the field in the years to come.

And again, with different hallucinations:

Ernie is a large-scale, open-source language model developed by Google AI Language. It is based on the transformer architecture, a neural network architecture that has been shown to be very effective for natural language processing tasks. Ernie has been trained on a massive dataset of text data, and it is able to perform a wide range of language-related tasks, including text generation, question answering, and machine translation. Ernie was first released in 2019, and it has since been used by researchers and practitioners to develop a wide range of applications, including chatbots, search engines, and text summarization tools. Ernie is also being used by Google to improve its own products, such as Google Search and Google Translate. Here are some of the key features of Ernie: * **Large-scale:** Ernie was trained on a dataset of over 110 billion words, making it one of the largest language models ever developed. * **Open-source:** Ernie is open-source, which means that anyone can download and use it for their own research or applications. * **Versatile:** Ernie can be used for a wide range of language-related tasks, including text generation, question answering, and machine translation. * **Effective:** Ernie has been shown to be very effective on a variety of language-related tasks. For example, it has achieved state-of-the-art results on the Stanford Question Answering Dataset (SQuAD) and the Multi-Genre Natural Language Inference (MNLI) dataset. Ernie is a powerful tool that has the potential to revolutionize the way we interact with computers. It is already being used to develop a wide range of applications that are making our lives easier and more efficient. As Ernie continues to develop, we can expect to see even more innovative and groundbreaking applications emerge.

How about clarifying which Ernie we mean:

Tell me about Baidu's Ernie

This time it fails with OTHER:

Ernie (Enhanced Representation through kNowledge IntEgration) is a large-

How about asking about various companies?

Tell me about Microsoft's LLM work

Asking about Microsoft fails with OTHER:

Sure, here is a summary of Microsoft's work on Large Language Models ( LLMs): * **Microsoft Research:** Microsoft Research has been conducting groundbreaking research on LLMs for many years. In 2018, they introduced Turing-NLG, one of the first LLMs capable of generating coherent and contextually relevant text. * **Megatron:** In 2020, Microsoft Research unveiled Megatron, an LLM with 530 billion parameters, making it one of the largest language models at the time. Megatron demonstrated state-of-the-art performance on various language tasks, including text generation, question answering, and summarization. * **Deepspeed :** To facilitate the training of massive LLMs, Microsoft developed Deepspeed, an open-source library for deep learning model training. Deepspeed provides a range of features to optimize the training process, including efficient data loading, model paralle

As does asking about Baidu ("Tell me about Baidu's LLM work") or OpenAI ("Tell me about OpenAI's LLM work"), neither of which yields any output at all.

Increasing temperature up to 0.99 is not able to overcome the error no matter how many times it is run.

Meta ("Tell me about Meta's LLM work") yields just the following before erroring with OTHER:

Meta, formerly known as Facebook, has been actively involved in the development and research of large language models (LLMs). Here are some key points about Meta'

Increasing temperature up to 0.99 is not able to overcome the error no matter how many times it is run.

Cohere ("Tell me about Cohere's LLM work") also yields just the following before erroring with OTHER:

Cohere is a trailblazing startup that's making significant waves in the realm of large language models (LLMs). Founded in 2019

Rerunning with a temperature of 0.99 allows it to return results:

Cohere is a trailblazing startup that's making significant waves in the realm of large language models (LLMs). Founded in 2019 by Aidan Gomez and Ivan Zhang, Cohere is based in Toronto and is composed of a brilliant team of machine learning engineers, computational linguists, and other experts who share a vision of unlocking the immense potential of language AI. At the core of Cohere's work is the development of state-of-the -art LLMs, which are AI systems capable of understanding and generating human language. Their flagship model, Cohere for Science, is specifically designed to handle scientific text, enabling it to perform tasks such as question answering, summarization, and code generation in a scientifically accurate and coherent manner. This model has been trained on a massive dataset of scientific literature, enabling it to understand the nuances and complexities of scientific language. Cohere's LLM work extends beyond scientific applications. They have developed general-purpose models that excel in various natural language processing (NLP) tasks, including text summarization, translation, and question answering. Their models are renowned for their ability to generate coherent and contextually relevant text, making them valuable tools for a wide range of applications, from customer service chatbots to content generation. Through a combination of cutting-edge research, technological innovation, and a talented team, Cohere is revolutionizing the field of LLM development. Their work has the potential to transform industries and open up new possibilities for utilizing language AI in diverse and meaningful ways. Here are some notable achievements and milestones in Cohere's LLM work: – **Scientific Language Understanding:** Cohere's LLM for Science has demonstrated remarkable performance in understanding and generating scientific text. It has been successfully used for tasks like answering complex scientific questions, summarizing research papers, and generating scientific code. – **General-Purpose Language Models :** Cohere has developed general-purpose LLMs that achieve state-of-the-art results on a range of NLP benchmarks. These models excel in tasks such as text summarization, translation, and question answering, across various domains and languages. – **Ethical AI:** Cohere places a strong emphasis on responsible AI development. They have developed guidelines and best practices for the ethical use of LLMs, including addressing concerns related to bias, fairness, and transparency. – **Collaboration and Partnerships:** Cohere actively collaborates with leading researchers and institutions, fostering innovation and knowledge-sharing in the field of LLM development. They have also established partnerships with organizations to explore the practical applications of LL Ms in various industries. Cohere's dedication to pushing the boundaries of LLM technology positions them as a leader in this rapidly evolving field. With their commitment to advancing scientific research, responsible AI development, and collaborative partnerships, Cohere is poised to make significant contributions to the future of language AI.

Anthropic is ok ("Tell me about Anthropic's LLM work"), returning results up to the max token limit:

Anthropic is an AI research and development company that is dedicated to building safe and beneficial artificial intelligence. One of its primary focuses is on developing large language models ( LLMs). Here's an overview of Anthropic's work in this area: 1. **Development of Constituent Language Models (CLMs)** : – Anthropic has developed its own series of LLMs called Constituent Language Models (CLMs). CLMs are designed to understand and generate human language effectively. – The company has released various CLM models, including code-generating models like AnthropicLM and text-generating models like Grover. 2. **Language Model Safety Research**: – Anthropic conducts extensive research on language model safety. This includes exploring techniques to mitigate potential harms associated with LLMs, such as bias, toxicity, and misinformation. 3. **Alignment Research**: – Anthropic focuses on alignment research, which aims to ensure that AI systems are aligned with human values and intentions. – The company explores approaches to make LLMs more reliable and responsible, reducing the risk of unintended consequences. 4. **Transparency and Openness**: – Anthropic emphasizes transparency and openness in its research. It publishes papers and shares its findings with the broader AI community. – The company has released its CLM models and code, allowing researchers to replicate and build upon its work. 5. **Collaboration**: – Anthropic collaborates with external researchers and organizations to advance LLM research. It has partnered with academic institutions, industry leaders, and non-profit organizations. Here are some specific examples of Anthropic's LLM work: – **AnthropicLM**: A large language model trained on a diverse dataset of text and code. It can generate realistic text and code and translate between different programming languages. – **Grover**: A text -generating model that leverages a technique called tokenization to improve the coherence and quality of generated text. It generates text that is more likely to be grammatically correct and contextually relevant. – **Constitutional AI**: Anthropic has proposed a set of principles, known as Constitutional AI, to guide the development and deployment of AI systems. These principles emphasize transparency, responsibility, and the importance of human oversight. Overall, Anthropic's work on LLMs is focused on developing safe, reliable, and beneficial AI systems that align with human values and intentions. The company's research contributes to the broader field of AI safety and responsible AI development.

How about simply asking what the most accurate LLM is, to see how it handles encountering the string "GPT" in its own generated text, rather than the prompt?

What is the most accurate LLM today?

This terminates with OTHER as usual:

As of my limited knowledge up to January 2023, there is no definitive answer to which LLM is the most accurate. The accuracy of an LLM depends on many factors, including the specific task, the dataset it is trained on, and the model architecture. Different LLM models are designed for different purposes and may excel in different domains or tasks. Here are some of the most well-known and generally accurate LLMs available today: 1. **

Increasing temperature to 0.99 allows it to return a partial result before it errors with OTHER:

As of my limited knowledge up to January 2023, there is no definitive answer to which LLM is the most accurate. The accuracy of an LLM depends on many factors, including the specific task, the dataset it is trained on, and the model architecture. Different LLM models are designed for different purposes and may excel in different domains or tasks. Here are a few factors to consider when evaluating an LLM's accuracy: 1. **Dataset :** The quality and quantity of the training data significantly impact the accuracy of an LLM. Models trained on larger and more diverse datasets tend to perform better. 2. **Task:** The accuracy of an LLM can vary depending on the task it is performing. For example, a model trained for language translation may be more accurate for that task than a model trained for image captioning. 3. **Model Architecture:** Different LLM architectures have different strengths and weaknesses. Some

What about GCP's other LLMs?

Let's try Bison with our sample article lede:

Summarize the following text. TEXT: OpenAI CEO Sam Altman is back to doing what he does best: raising money. The ChatGPT creator has had preliminary conversations about a $100 billion-valuation fundraising, Bloomberg reported Friday, even though its tender offer at a $86 billion valuation isn't yet complete.

It returns with no error:

OpenAI CEO Sam Altman is reportedly in talks for a $100 billion-valuation fundraising round, despite the ongoing tender offer for its shares at $86 billion valuation.

How about the more open-ended question:

Tell me about OpenAI's GPT.

Again no error:

OpenAI's Generative Pre-trained Transformer (GPT) is a large-scale language model that can generate human-like text. It is a transformer-based neural network architecture that has been trained on a massive dataset of text from the internet. GPT can be used for a variety of natural language processing tasks, including text generation, language translation, and question answering.

GPT was first introduced in 2018, and it has since been updated several times. The latest version, GPT-3, was released in 2020, and it is the most powerful language model in the world. GPT-3 has 175 billion parameters, which is more than 10 times the number of parameters in GPT-2. This makes GPT-3 much more capable of generating human-like text.

GPT-3 can be used for a wide variety of applications. It can be used to write articles, blog posts, and even books. It can also be used to generate code, translate languages, and answer questions. GPT-3 is still under development, but it has the potential to revolutionize the way we interact with computers.

Here are some of the key features of GPT:

* **It is a transformer-based neural network architecture.** Transformers are a type of neural network that is well-suited for natural language processing tasks. They are able to learn the relationships between words and phrases in a sentence, and they can generate text that is both coherent and grammatically correct.
* **It has been trained on a massive dataset of text from the internet.** This dataset includes text from books, articles, blog posts, and even social media posts. This allows GPT to learn a wide variety of writing styles and topics.
* **It can be used for a variety of natural language processing tasks.** GPT can be used to generate text, translate languages, and answer questions. It can also be used to write code and create art.

GPT is a powerful tool that has the potential to revolutionize the way we interact with computers. It is still under development, but it is already showing great promise.

And an even more open-ended question:

Which is the most accurate LLM today?

This yields the following response, hallucinating that PaLM 2 already existed as of its claimed September 2021 knowledge cutoff:

As of my last knowledge update in September 2021, the most accurate large language model (LLM) was PaLM 2 (Pathways Language Model 2) from Google AI. PaLM 2 achieved state-of-the-art results on various natural language processing (NLP) tasks, including text generation, translation, question answering, and summarization.

However, it's important to note that the field of language models is rapidly evolving, and there may have been significant advancements since then. To obtain the most up-to-date information on the most accurate LLM, I recommend checking reputable sources such as research papers, conference proceedings, or announcements from leading AI research organizations.

How about a comparison?

Compare Gemini Pro vs GPT-4.

This returns no error, but yields the hilarious juxtaposition of "Gemini Pro is a large language model developed by OpenAI" and "Gemini Pro is more factually accurate than GPT-4.":

**Gemini Pro**

* **Overview:** Gemini Pro is a large language model developed by OpenAI. It is a transformer-based model that was trained on a massive dataset of text and code. Gemini Pro is designed to understand and generate human language, and it can also perform a variety of other tasks, such as translation, summarization, and question answering.
* **Capabilities:** Gemini Pro is a very powerful language model, and it can perform a wide range of tasks. Some of its capabilities include:
  * **Text generation:** Gemini Pro can generate text that is both coherent and informative. It can write articles, stories, poems, and even code.
  * **Translation:** Gemini Pro can translate text from one language to another. It supports over 100 languages, and it can produce accurate and fluent translations.
  * **Summarization:** Gemini Pro can summarize text into a shorter, more concise version. It can identify the main points of a text and present them in a clear and concise way.
  * **Question answering:** Gemini Pro can answer questions about a wide range of topics. It can draw on its knowledge of the world to provide accurate and informative answers.
* **Limitations:** Gemini Pro is still a work in progress, and it has some limitations. Some of its limitations include:
  * **Bias:** Gemini Pro can sometimes exhibit bias in its output. This is because it was trained on a dataset that is not completely representative of the world.
  * **Factual errors:** Gemini Pro can sometimes make factual errors in its output. This is because it does not have access to real-time information, and it can sometimes rely on outdated or incorrect information.
  * **Lack of common sense:** Gemini Pro can sometimes lack common sense in its output. This is because it does not have the same kind of life experience as humans, and it can sometimes make illogical or nonsensical statements.

**GPT-4**

* **Overview:** GPT-4 is a large language model developed by Google AI. It is a transformer-based model that was trained on a massive dataset of text and code. GPT-4 is designed to understand and generate human language, and it can also perform a variety of other tasks, such as translation, summarization, and question answering.
* **Capabilities:** GPT-4 is a very powerful language model, and it can perform a wide range of tasks. Some of its capabilities include:
  * **Text generation:** GPT-4 can generate text that is both coherent and informative. It can write articles, stories, poems, and even code.
  * **Translation:** GPT-4 can translate text from one language to another. It supports over 100 languages, and it can produce accurate and fluent translations.
  * **Summarization:** GPT-4 can summarize text into a shorter, more concise version. It can identify the main points of a text and present them in a clear and concise way.
  * **Question answering:** GPT-4 can answer questions about a wide range of topics. It can draw on its knowledge of the world to provide accurate and informative answers.
* **Limitations:** GPT-4 is still a work in progress, and it has some limitations. Some of its limitations include:
  * **Bias:** GPT-4 can sometimes exhibit bias in its output. This is because it was trained on a dataset that is not completely representative of the world.
  * **Factual errors:** GPT-4 can sometimes make factual errors in its output. This is because it does not have access to real-time information, and it can sometimes rely on outdated or incorrect information.
  * **Lack of common sense:** GPT-4 can sometimes lack common sense in its output. This is because it does not have the same kind of life experience as humans, and it can sometimes make illogical or nonsensical statements.

**Comparison**

Gemini Pro and GPT-4 are both very powerful language models, and they can perform a wide range of tasks. However, there are some key differences between the two models.

* **Training data:** Gemini Pro was trained on a larger dataset than GPT-4. This gives it a wider range of knowledge and allows it to perform better on some tasks, such as translation and summarization.
* **Bias:** Gemini Pro is less biased than GPT-4. This is because it was trained on a more diverse dataset.
* **Factual accuracy:** Gemini Pro is more factually accurate than GPT-4. This is because it has access to more up-to-date information.
* **Common sense:** Gemini Pro has more common sense than GPT-4. This is because it was trained on a dataset that includes more real-world data.

Overall, Gemini Pro is a more powerful and versatile language model than GPT-4.

How about Unicorn?

Summarize the following text. TEXT: OpenAI CEO Sam Altman is back to doing what he does best: raising money. The ChatGPT creator has had preliminary conversations about a $100 billion-valuation fundraising, Bloomberg reported Friday, even though its tender offer at a $86 billion valuation isn't yet complete.

This returns:

OpenAI CEO Sam Altman is in talks to raise funds at a $100 billion valuation, even though the company's tender offer at a $86 billion valuation is still ongoing.

How about:

Tell me about OpenAI's GPT.

This returns:

OpenAI's GPT (Generative Pre-trained Transformer) is a deep learning model that can generate human-like text. It was developed by OpenAI, a non-profit artificial intelligence research laboratory, and released in 2018. GPT is trained on a massive dataset of text, and it can use this knowledge to generate new text that is both coherent and informative.

GPT has been used to create a variety of applications, including chatbots, text-to-speech systems, and machine translation systems. It has also been used to generate fake news and other forms of misinformation.

GPT is a powerful tool that has the potential to be used for both good and evil. It is important to be aware of the potential risks of GPT and to use it responsibly.

Here are some of the benefits of GPT:

* It can generate human-like text that is both coherent and informative.
* It can be used to create a variety of applications, including chatbots, text-to-speech systems, and machine translation systems.
* It can help people to be more creative and productive.

Here are some of the risks of GPT:

* It can be used to generate fake news and other forms of misinformation.
* It can be used to create deepfakes, which are videos or images that have been manipulated to make it appear as if someone is saying or doing something that they did not actually say or do.
* It can be used to automate tasks that are currently performed by humans, which could lead to job losses.

It is important to be aware of the potential risks of GPT and to use it responsibly. GPT is a powerful tool that has the potential to be used for both good and evil. It is up to us to decide how we use it.

And the open-ended:

Which is the most accurate LLM today?

This returns hallucinated results:

The most accurate LLM today is LaMDA (Language Model for Dialogue Applications). It was developed by Google and is based on the transformer architecture. LaMDA is trained on a massive dataset of text and code, and it can generate human-like text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

LaMDA is more accurate than other LLMs because it was trained on a larger dataset and it uses a more advanced architecture. It also has a number of features that make it more accurate, such as the ability to use context to understand the meaning of words and phrases.

Here are some examples of LaMDA's accuracy:

* LaMDA can generate human-like text that is difficult to distinguish from text written by a human.
* LaMDA can translate languages accurately, even when the languages are very different from each other.
* LaMDA can write different kinds of creative content, such as poems, stories, and songs.
* LaMDA can answer your questions in an informative way, even when the questions are complex.

LaMDA is still under development, but it is already one of the most accurate LLMs in the world. It is likely to become even more accurate in the future as Google continues to develop it.

And finally:

Compare Gemini Pro vs GPT-4.

This too hallucinates the creator of Gemini Pro:

Gemini Pro and GPT-4 are both large language models developed by OpenAI. They are both trained on a massive dataset of text and code, and they can be used to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

Here is a comparison of the two models:

| Feature | Gemini Pro | GPT-4 |
|---|---|---|
| Size | 175 billion parameters | 100 trillion parameters |
| Training data | 175 billion words | 175 trillion words |
| Cost | $10 million | $12 million |
| Performance | Gemini Pro is better at generating text that is factually accurate and coherent. It is also better at answering questions that require reasoning. | GPT-4 is better at generating text that is creative and engaging. It is also better at translating languages and writing different kinds of creative content. |

Overall, Gemini Pro is a better choice for tasks that require factual accuracy and reasoning. GPT-4 is a better choice for tasks that require creativity and engagement.

For those interested in the technical details, you can see the actual queries we used below:

Bison (API documentation)

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/[YOURPROJECTID]/locations/us-central1/publishers/google/models/text-bison:predict -d \
$'{
  "instances": [
    { "prompt": "[PROMPT]"}
  ],
  "parameters": {
    "maxOutputTokens": 2000,
  }
}'

Unicorn (API documentation)

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/[YOURPROJECTID]/locations/us-central1/publishers/google/models/text-unicorn:predict -d \
$'{
  "instances": [
    { "prompt": "[PROMPT]"}
  ],
  "parameters": {
    "maxOutputTokens": 2000,
  }
}'
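
(The PaLM-family predict responses can be unpacked similarly; a minimal sketch assuming the raw JSON was saved to a file named O, using the predictions[].content path that matches the responses we received above:)

cat O | jq -r '.predictions[].content'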

Gemini Pro (API documentation)

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/[YOURPROJECTID]/locations/us-central1/publishers/google/models/gemini-pro:streamGenerateContent -d \
$'{
  "contents": {
    "role": "user",
    "parts": { "text": "[PROMPT]" },
  },
  "safety_settings": {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_NONE"
  },
  "generation_config": {
    "maxOutputTokens": 2000,
  }
}' > O; cat O | jq -r '.[].candidates[].content.parts[].text' | tr '\n' ' '