AI technology is exploding, and industries are racing to adopt it as fast as possible. Before your enterprise dives headfirst into a confusing sea of opportunity, it’s important to explore how generative AI works, what red flags enterprises need to consider, and how to evolve into an AI-ready enterprise.

How generative AI actually works

One of the most common and powerful approaches to generative AI is the large language model (LLM), such as GPT-4 or Google’s Bard. These are neural networks trained on vast amounts of text from sources such as books, websites, social media and news articles. They learn the patterns and probabilities of language by guessing the next word in a sequence of words. For example, given the input “The sky is,” the model might predict “blue,” “clear,” “cloudy” or “falling.”
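To make the next-word idea concrete, here is a minimal sketch that asks a small open model (GPT-2, standing in for far larger systems like GPT-4) for its most likely continuations of “The sky is.” It assumes the Hugging Face transformers library and PyTorch are installed.

```python
# Minimal sketch of next-word prediction with a small open model (GPT-2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every token in the vocabulary as a possible continuation of the prompt.
inputs = tokenizer("The sky is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab_size)

# The last position holds the probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10}  {prob.item():.3f}")
```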

By using different inputs and parameters, LLMs can generate different types of outputs such as summaries, headlines, stories, essays, reviews, captions, slogans or code. For example, given the input “write a catchy slogan for a new brand of toothpaste,” the model might generate “smile with confidence,” “brush away your worries,” “the toothpaste that cares” or “sparkle like a star.”
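Continuing the same sketch, the snippet below shows how a prompt plus sampling parameters such as temperature shape what comes back. In practice an enterprise would more likely call a hosted LLM API, but the idea is the same.

```python
# Sketch of steering a model with a prompt and sampling parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Write a catchy slogan for a new brand of toothpaste:"
inputs = tokenizer(prompt, return_tensors="pt")

# Higher temperature -> more varied, creative completions; lower -> safer ones.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    completion = seq[inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(completion, skip_special_tokens=True))
```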

Red flags enterprises need to consider when using generative AI

While generative AI can offer many benefits and opportunities for enterprises, it also comes with some drawbacks that must be addressed. Here are some of the red flags that enterprises need to consider before adopting generative AI.

Public vs. private information

As employees begin to experiment with generative AI, they will be creating prompts, generating text and building this new technology into their workflow. It is essential to have clear policies that delineate information that is cleared for the public versus private or proprietary information. Submitting private information, even in an AI prompt, means that information is no longer private. Begin the conversation early to ensure teams can use generative AI without compromising proprietary information.
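One lightweight guardrail, sketched below on the assumption that prompts pass through an internal gateway before reaching any external model, is to redact obviously sensitive patterns before anything leaves the network. The patterns and function name here are illustrative, not a complete data-loss-prevention policy.

```python
import re

# Illustrative patterns only; a real policy would also cover account numbers,
# customer names, source code, contract text and so on.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "API_KEY": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt
    leaves the company network."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Draft a reply to jane.doe@example.com about invoice 4471."))
# -> "Draft a reply to [EMAIL REDACTED] about invoice 4471."
```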

AI hallucinations

Generative AI models are not perfect and may sometimes produce outputs that are inaccurate, irrelevant or nonsensical. These outputs are often referred to as AI hallucinations or artifacts. They may result from factors such as insufficient data quality or quantity, model bias or errors, or malicious manipulation. For example, a generative AI model may generate a fake news article that spreads misinformation or propaganda. Therefore, enterprises need to be aware of the limitations and uncertainties of generative AI models and verify their outputs before using them for decision-making or communication.
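As a rough illustration of that verification step, the sketch below flags generated sentences that share few content words with the source material they are meant to summarize. It is a toy screening heuristic under simple assumptions, not a real fact-checking pipeline, and it does not replace human review.

```python
# Rough screening step: flag generated sentences whose content words are
# mostly absent from the source document they are supposed to summarize.
import re

def content_words(text: str) -> set:
    stopwords = {"the", "a", "an", "is", "are", "was", "were", "of", "to",
                 "and", "in", "on", "for", "that", "it", "this", "with"}
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords}

def flag_unsupported(generated: str, source: str, min_overlap: float = 0.5) -> list:
    """Return generated sentences poorly supported by the source text."""
    source_words = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = content_words(sentence)
        if words and len(words & source_words) / len(words) < min_overlap:
            flagged.append(sentence)
    return flagged

source = "Quarterly revenue grew 12% to $4.2M, driven by the new enterprise tier."
draft = "Revenue grew 12% to $4.2M. The company also announced a merger with Acme Corp."
print(flag_unsupported(draft, source))
# -> ["The company also announced a merger with Acme Corp."]
```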

Using the wrong tool for the job

Generative AI models are not necessarily one-size-fits-all solutions that can solve any problem or task. While some models prioritize generalized responses and a chat-based interface, others are built for specific purposes. In other words, some models may be better at generating short texts than long texts; some may be better at generating factual texts than creative texts; some may be better at generating texts in one domain than another domain.

Many generative AI platforms can be further trained for a specific niche like customer support, medical applications, marketing or software development. It’s easy to simply use the most popular product, even if it isn’t the right tool for the job at hand. Enterprises need to understand their goals and requirements and choose the right tool for the job.

Garbage in, garbage out

Generative AI models are only as good as the data they are trained on. If the data is noisy, incomplete, inconsistent or biased, the model will likely produce outputs that reflect these flaws. For example, a generative AI model trained on inappropriate or biased data may generate texts that are discriminatory and could damage your brand’s reputation. Therefore, enterprises need to ensure that they have high-quality data that is representative, diverse and unbiased.
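A basic data audit can surface many of these flaws before training or fine-tuning begins. The sketch below assumes a hypothetical training_data.csv with “text” and “label” columns; a real audit would also examine bias across demographic groups, toxicity and coverage of the intended use cases.

```python
# Minimal data-quality audit of a hypothetical fine-tuning dataset.
import pandas as pd

df = pd.read_csv("training_data.csv")  # assumed columns: "text", "label"

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_text": int(df["text"].isna().sum()),
    "empty_text": int((df["text"].str.strip() == "").sum()),
    "label_balance": df["label"].value_counts(normalize=True).round(3).to_dict(),
    "median_text_length": int(df["text"].str.len().median()),
}

for metric, value in report.items():
    print(f"{metric}: {value}")
```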

How to evolve into an AI-ready enterprise

Adopting generative AI is not a simple or straightforward process. It requires a strategic vision, a cultural shift and a technical transformation. Here are some of the steps that enterprises need to take to evolve into an AI-ready enterprise.

Find the right tools

As noted above, generative AI models are not interchangeable or universal. They have different capabilities and limitations depending on their architecture, training data and parameters. Therefore, enterprises need to find the right tools that match their needs and objectives. For example, an AI platform that creates images — like DALL-E or Stable Diffusion — probably wouldn’t be the best choice for a customer support team. 

Platforms are emerging that specialize their interface for specific roles: copywriting platforms optimized for marketing results, chatbots optimized for general tasks and problem solving, developer-specific tools that connect with programming databases, medical diagnosis tools and more. Enterprises need to evaluate the performance and quality of the generative AI models they use, and compare them with alternative solutions or human experts.
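A lightweight way to start that comparison is a blind, side-by-side review on prompts drawn from your own workload. The sketch below assumes reviewers have already rated each candidate tool’s outputs from 1 (unusable) to 5 (ship as-is); the tool names and scores are made up for illustration.

```python
# Compare candidate generative AI tools on reviewer ratings of their outputs.
from statistics import mean

ratings = {
    "general_chatbot":       [4, 3, 2, 4, 3, 3, 2, 4],
    "marketing_copy_tool":   [5, 4, 4, 5, 3, 4, 4, 5],
    "in_house_prompted_llm": [3, 3, 4, 2, 3, 4, 3, 3],
}

for tool, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    usable = sum(s >= 4 for s in scores) / len(scores)
    print(f"{tool:22s} mean={mean(scores):.2f}  usable_rate={usable:.0%}")
```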

Manage your brand

Every enterprise must also think about control mechanisms. Where a marketing team may historically have been the gatekeeper for brand messaging, it was also a bottleneck. Now that anyone across the organization can generate copy, it’s important to find tools that allow you to build in your brand guidelines, messaging, audiences and brand voice. Having AI that incorporates brand standards is essential to removing the bottleneck for on-brand copy without inviting chaos.
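In practice, this often means prepending brand guidelines to every request. The sketch below uses the common chat-style system-plus-user message convention; the guidelines and the call_llm placeholder stand in for whatever platform you adopt.

```python
# Sketch of baking brand guidelines into every generation request.
BRAND_SYSTEM_PROMPT = """You write copy for Acme Dental.
Voice: warm, plain-spoken, confident; no exclamation marks, no medical claims.
Audience: busy parents comparing everyday toothpaste brands.
Always include the tagline 'Smile with confidence.'"""

def build_messages(request: str) -> list:
    """Wrap a user request with the standing brand guidelines."""
    return [
        {"role": "system", "content": BRAND_SYSTEM_PROMPT},
        {"role": "user", "content": request},
    ]

messages = build_messages("Write three short headlines for our spring campaign.")
# response = call_llm(messages)  # placeholder for the provider's API call
```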

Cultivate the right skills

Generative AI models are not magic boxes that can generate perfect texts without any human input or guidance. They require human skills and expertise to use them effectively and responsibly. One of the most important skills for generative AI is prompt engineering: the art and science of designing inputs and parameters that elicit the desired outputs from the models.

Prompt engineering involves understanding the logic and behavior of the models, crafting clear and specific instructions, providing relevant examples and feedback, and testing and refining the outputs. Prompt engineering is a skill that can be learned and improved over time by anyone who works with generative AI.
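Here is a minimal sketch of what that looks like in practice: a clear instruction, explicit constraints and a couple of worked examples (few-shot prompting) assembled from a template, so a team can vary one element at a time and test the results. The wording and examples are illustrative.

```python
# Sketch of a few-shot prompt template: instruction, constraints, examples.
PROMPT_TEMPLATE = """You are a support assistant for a billing product.
Answer in at most two sentences, cite the relevant help-center article,
and say "I don't know" if the answer is not covered by the examples or context.

Example question: How do I update my credit card?
Example answer: Go to Settings > Billing and choose "Update payment method"
(see help article #112).

Example question: Can I get a refund after 60 days?
Example answer: Refunds are only available within 30 days of purchase
(see help article #87).

Question: {question}
Answer:"""

prompt = PROMPT_TEMPLATE.format(question="How do I download my invoices?")
# Send `prompt` to your model of choice, then compare template variants
# against the same test questions to see which performs best.
```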

Establish new roles and workflows

Generative AI models are not standalone tools that can operate in isolation or replace human workers. They are collaborative tools that can augment and enhance human creativity and productivity. Therefore, enterprises need to establish new workflows that integrate generative AI models with human teams and processes. 

Enterprises may need to create entirely new roles or functions, such as an AI ombudsman or AI QA specialist, who can oversee and monitor the use and output of generative AI models and address problems when they arise. They may also need to implement new policies or protocols — such as ethical guidelines or quality standards — that ensure the accountability and transparency of generative AI models.

Generative AI is no longer on the horizon; it has arrived

Generative AI is one of the most exciting and disruptive technologies of our time. It has the potential to transform how we create and consume content across domains and industries. However, adopting generative AI is not a trivial or risk-free endeavor. It requires careful planning, preparation and execution. Enterprises that embrace and master generative AI will gain a competitive edge and create new opportunities for growth and innovation.

Yaniv Makover is the CEO and cofounder of Anyword.
