Debunking 5 Common Misconceptions Among Enterprises About Generative AI


Generative AI is a type of artificial intelligence (AI) that can interact with users in natural language and produce content such as article outlines, reports and other text, as well as multimodal content such as images, video and audio. It's hard to recall another technology that has drawn such intense interest from enterprises, so quickly, as generative AI has today.


Since Google Cloud unveiled its latest generative AI capabilities, we've been invited to many meetings with local organizations to discuss how they can incorporate the technology into their businesses.


Generative AI is much more approachable than previous generations of AI, and everyone is excited about the range of abilities it offers. However, some of this excitement is unfounded. So in this post, we want to debunk some misconceptions about generative AI. Here are five misconceptions companies often hold when they set out to adopt it:


Misconception 1: One model fits all


The idea that a single large language model (LLM), or one type of generative AI model, will cover all use cases is a myth. The nature of generative AI, especially for enterprises, means we will see thousands of different language models tailored to different uses.


Some models are good at summarization, some at producing structured output such as bulleted lists, some at reasoning, and so on. Industries, businesses and corporations also have very different editorial tones for expressing knowledge. All of this should be considered when choosing a language model.
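To make this concrete, here is a minimal, hypothetical Python sketch of routing different task types to differently suited models. The model names, the task-to-model mapping and the generate() helper are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch: route each task type to a model suited for it.
# The model names, TASK_TO_MODEL mapping and generate() helper are
# illustrative placeholders, not a specific vendor's API.

TASK_TO_MODEL = {
    "summarize": "compact-summarization-model",  # smaller, cheaper, good at condensing text
    "bullets": "structured-output-model",        # reliable at lists and formatting
    "reasoning": "large-reasoning-model",        # slower, stronger at multi-step logic
}


def generate(model_name: str, prompt: str) -> str:
    """Placeholder for whichever model client your platform provides."""
    raise NotImplementedError("Call your model provider's SDK here.")


def run_task(task_type: str, prompt: str, tone: str = "formal") -> str:
    """Pick a model based on the task, and fold the company's editorial tone into the prompt."""
    model = TASK_TO_MODEL.get(task_type, "general-purpose-model")
    full_prompt = f"Respond in a {tone} tone.\n\n{prompt}"
    return generate(model, full_prompt)
```

In practice the routing logic can weigh cost, latency and data residency as well, but the point stands: the model is a per-task choice, not a single global one.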



Misconception 2: Bigger is better


Generative AI models consume a large amount of computing resources, and building such foundation models from scratch requires substantial investment.


High costs are one of the reasons why using the right model for a task is so important.


For example, the model your company uses doesn't need to know every word of every Taylor Swift song to generate a summary report on next quarter's sales goals. Context matters, and you need to be selective about how much capability a model really requires for your use case.


Misconception 3: Just me and my AI


Just as the "bring your own device" (BYOD) and "bring your own application" (BYOA) movements have previously raised concerns about "shadow IT". Some financial institutions have also closed access to publicly available generative AI out of concern that the models could leak internal company information.


Some public generative AI services may use submitted data in future training sets, potentially exposing internal company data. Say a bank is exploring a merger for a large industrial client, and someone in the mergers and acquisitions (M&A) department consults a public model and asks: "What are some good acquisition targets for Company XYZ?" If that information contributes to the model's training data, the service could provide the same answer to anyone who asks. By default, Google Cloud AI services do not use private data in this way.


Most of the companies we met were concerned about the security of the questions they asked the models, the content the models were trained on, and the output of the models – and rightly so.


Misconception 4: No questions asked


The accuracy and reliability of generative AI are among the biggest topics surrounding this new technology. The underlying algorithms are designed to provide an answer no matter what, and in some cases generative AI models can produce answers that are untrue or inaccurate.


Every company we know has invested deeply in creating verifiable facts and data. It is important for enterprises to use models and technology architectures that ground their responses in that data.


Most public generative AI models are indifferent to these enterprise data needs. It is important, especially for public sector organizations and companies in regulated industries, to avoid taking unnecessary risks.
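A common way to keep answers anchored to an enterprise's own verified data is to retrieve relevant internal documents first and pass them to the model as context. The Python sketch below shows that pattern only in outline; search_internal_docs() and generate() are hypothetical placeholders rather than any particular product's API:

```python
# Minimal sketch of grounding a model's answer in the enterprise's own data.
# search_internal_docs() and generate() are hypothetical placeholders.

def search_internal_docs(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: return the most relevant passages from the company's data store."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Placeholder: call whichever model the enterprise has chosen."""
    raise NotImplementedError


def grounded_answer(question: str) -> str:
    """Answer only from retrieved internal context, so responses stay tied to verifiable data."""
    passages = search_internal_docs(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```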


Misconception 5: Ask me anything


Enterprises have many sources of information: pricing, human resources, legal, financial and so on, but we have never heard of a company that allows open access to all of it.


Some business leaders are increasingly interested in loading all of their information into an LLM so that it can potentially answer any question, whether at an organizational or a global level.


Once a company has figured out how to keep its information private and factual, it quickly arrives at the next question: how do I manage who can ask questions of my model, and at what level?
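A minimal sketch of what such a check could look like, assuming a hypothetical mapping from user roles to the data domains they may query; the role names, domains and query_model() helper are illustrative only:

```python
# Hypothetical sketch: gate questions by the user's role before they ever reach the model.
# ROLE_TO_DOMAINS and query_model() are illustrative placeholders.

ROLE_TO_DOMAINS = {
    "hr_analyst": {"human_resources"},
    "finance_lead": {"financial", "pricing"},
    "legal_counsel": {"legal"},
}


def query_model(question: str) -> str:
    """Placeholder for the enterprise's (grounded) model query pipeline."""
    raise NotImplementedError


def ask(role: str, domain: str, question: str) -> str:
    """Only forward the question if the caller's role is allowed to query this data domain."""
    if domain not in ROLE_TO_DOMAINS.get(role, set()):
        return "Access denied: this role cannot query this data domain."
    return query_model(question)
```

In a real deployment, this mapping would come from the organization's existing identity and access management system rather than a hard-coded table.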


The way forward with generative AI


At Google Cloud, we continue to build on our deep experience and expertise in AI, and we are committed to working with the wider industry to develop enterprise-ready AI innovations that are accessible, trusted and accountable.


As enterprises explore how generative AI can help them drive positive business outcomes, it's important to understand these misconceptions to separate hype from reality, and to find the right partners to use this technology in a way that preserves the security and privacy of their data, applications and users.
