Add AI features to your application with Genkit

This article is adapted from my session at the Google Developer Group’s Build With AI event at the University of Ibadan, where I introduced attendees to Genkit and talked about some of the possibilities of building with it.

In this article, we’ll cover the following:

  • What is Genkit?
  • Getting Started
  • Workflows
  • Developer UI
  • Tool Calling and RAG
  • Deployments
  • Real Life Challenge For You To Try

What is Genkit?

Genkit is an open-source framework for building AI-powered applications. It provides an SDK, complete with testing and debugging tools, that empowers you to interact with AI models from any supported provider.

Typically, to use ChatGPT in your application, you would have to read through the relevant documentation, create an API key, and connect to the relevant endpoints for what you need to get done. If you wanted to connect to both ChatGPT and Claude, you’d repeat that process for each. What about ChatGPT, Claude, and Gemini?

I think you can already see where I’m going with this. Genkit unifies this experience by providing you with a centralised interface to interact with as many of these models as you have a use case for.

Right now, Genkit has a stable release for Node.js, but at the time of writing it’s in beta for Go and in alpha for Python.

Genkit is very powerful, and we won’t touch on every single thing it can do, but I’ll do my best to give you a pretty solid idea. Coming up is an actual application I built from scratch and integrated with Genkit.

Getting Started

For this article I’ll be using JavaScript. You can set up a simple application with a package.json and one JavaScript file. To get started with Genkit in your JavaScript-based application, first install it with the following command:

npm install genkit @genkit-ai/googleai

The main engine you need is genkit. We installed @genkit-ai/googleai as well because we want to use Gemini. If we wanted to use ChatGPT, we would have installed genkitx-openai instead. Whatever provider you want to use, you can find the correct package to install (or maybe write your own plugin 😀 – yes, Genkit can be extended).

Then, in your JavaScript file, you can import the genkit engine and the provider you want to use, and register the provider and its models with the engine. It looks something like this:

import { genkit } from "genkit";
import { googleAI } from "@genkit-ai/googleai";

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.0-flash'),
});

The provider is basically the company that created the models – Google, OpenAI, etc. – while examples of models include Gemini 2.5 Pro, Gemini 2.0 Flash (as in this case), o3-mini, and so on.

The most relatable use case I can think of is generating text from a prompt, so let’s see how that can be done. Building on our setup, we can add the following:

async function main() {
  const result = await ai.generate({
    prompt: "Hello, Gemini! You're live at Build with AI at Opolo Hub University of Ibadan.",
  });
  console.log(result.text);
}

main();

Now, when we run this code with node index.js (or npm run dev if you have a dev script), we see output like the following:

Hello! It's fantastic to be here, live at Build with AI at Opolo Hub, University of Ibadan! The energy from Nigeria's tech community is always inspiring.

What can I help you with today? Are you looking to dive into some AI concepts, discuss project ideas, or perhaps brainstorm some innovative applications? Let's build something amazing together!

You also have the option of streaming the text to the user so the response arrives in chunks, just like it does on these models’ websites.

Right off the bat, you can see how much simpler this is. If you wanted a response from ChatGPT instead, you’d just swap out the model. You can even register multiple models and specify which one to use per generate call.

NOTE: You’ll need an API key in your .env file for each provider you use, and each provider expects a specific variable name (e.g., GEMINI_API_KEY for Google AI), so check the Genkit documentation for the exact names.

Workflows

Genkit also allows you to define pretty complex workflows. To get a sense of how this works, think of AI agents or n8n automations: the result of one task cascades into the next task as its input, and after a number of steps, you get an output.

For example, imagine a recipe generator: a user uploads an image of a meal, a model analyzes ingredients, and another crafts a step-by-step recipe—perfect for food app developers. In order to use this feature, you first need to define the workflow. You can define workflows one by one and call them in order, or tie them into one giant workflow executed at once.

In defining workflows, you might want to use an input and output schema, like an API response, so each workflow knows what it’s handling. Genkit supports this with Zod. Let’s define a simple workflow that suggests a social game for an event based on its theme. Add this to your code:

import { genkit, z } from 'genkit'; // z here comes from zod

// Define the workflow
const eventSuggestionFlow = ai.defineFlow(
  {
    name: 'eventSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ eventItem: z.string() }),
  },
  async ({ theme }) => {
    const { text } = await ai.generate({
      model: googleAI.model('gemini-2.0-flash'),
      prompt: `Give a suggestion for a playable game that can be done at a ${theme} themed event.`,
    });
    return { eventItem: text };
  }
);

// Call the workflow
const { eventItem } = await eventSuggestionFlow({ theme: 'Halloween' });
console.log(eventItem);

Of course, in your app, you could display this to users or send it via email, depending on your needs.

Developer UI

The Developer UI is like the web browser console for your AI workflows. You can visualise, in real time, all of the steps and processes that happen in a workflow; you can pause it, debug the inputs and outputs you observe from any workflow, and more. It’s a versatile observability tool for everything you’ve used Genkit for in your application.

To use it, install the Genkit CLI with the following command:

npm install -g genkit-cli

Then you can start the Developer UI alongside your code with:

npx genkit start -- npm run dev

If no dev script is defined, run this instead:

npx genkit start -- node index.js

If you just want to start the Developer UI alone, you can do:

npx genkit start

Since genkit-cli is installed globally, you don’t actually need the npx prefix, so try the commands without it first.

You can also run the workflows in your code individually, straight from the CLI. Let’s run our eventSuggestionFlow, for example:

npx genkit flow:run eventSuggestionFlow '{"theme": "Workplace"}'

Tool Calling and RAG

In my opinion, these are the most exciting features of Genkit, and coupled with workflows, you’re practically unstoppable. Genkit allows the model to call external tools – such as other existing APIs – to fetch data to be processed alongside the user prompt and the given context. It can also interact with documents for RAG purposes by providing abstractions for indexers, embedders, and retrievers.

Deployments

Aside from integrating your workflow into an existing app or building an app around your AI functions, you can also deploy your workflow as a standalone agent on Firebase or on any platform that supports a Node/Express app deployment. Then, in your client application, you can consume that workflow rather like an API, using the runFlow function imported from “genkit/beta/client”.

Real Life Challenge For You To Try

To test Genkit in a real-life application, we’ll be building Chatty.

Note: This project is adapted from an existing chat application to be used as a practical base for educational purposes. I extend my gratitude to the original creator for their work, which provides a solid foundation for further development and learning. You can find the original video on YouTube here: Codesistency. Github Repo.

The original project is written in plain JavaScript, but this version includes TypeScript support on the frontend. Please do check out the Codesistency YouTube channel for other awesome tutorials. The starter files for this tutorial are found here (Chatty Starter Files).

The project is set up as a monorepo and serves the frontend from the backend. After cloning or downloading the project, you should see the backend and frontend with the following structures:

Backend Structure

CHATTY/
├── backend/
│   ├── node_modules/
│   └── src/
│       ├── controllers/
│       ├── lib/
│       ├── middleware/
│       ├── models/
│       ├── routes/
│       ├── index.js
│       └── .env
├── .gitignore
├── package-lock.json
└── package.json

Frontend Structure

frontend/
├── dist/
├── node_modules/
├── public/
├── src/
│   ├── components/
│   ├── constants/
│   ├── lib/
│   ├── pages/
│   ├── store/
│   ├── App.tsx
│   ├── index.css
│   ├── main.tsx
│   └── vite-env.d.ts
├── .env
├── .gitignore
├── custom.d.ts
├── eslint.config.js
├── index.html
├── package-lock.json
├── package.json
├── postcss.config.js
├── README.md
├── tailwind.config.js
├── tsconfig.app.json
├── tsconfig.json
├── tsconfig.node.json
└── vite.config.ts

The .env files were not pushed to GitHub, but in the frontend we have

REACT_APP_API_BASE_URL="http://localhost:8080/api"

and for the backend we have

MONGODB_URI=<YOUR_MONGO_URI>
PORT=8080
JWT_SECRET=<SOME_SECRET_STRING>
NODE_ENV="development"
CLOUDINARY_CLOUD_NAME=<YOUR_CLOUDINARY_NAME>
CLOUDINARY_API_KEY=<YOUR_CLOUDINARY_API_KEY>
CLOUDINARY_API_SECRET=<YOUR_CLOUDINARY_API_SECRET>
GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>

Caution: Replace the placeholders (e.g., <YOUR_MONGO_URI>) with actual values from your providers.

You can run the application from the root folder with

npm start

Our goal is to make a chatbot by adding Genkit AI features to this chat application – kind of like Meta AI on WhatsApp. To do this, we begin by installing the Genkit engine again, here on the backend:

cd backend/
npm install genkit @genkit-ai/googleai

The approach we want to take is to create a model for the chatbot messages and write endpoints the frontend can hit to send messages to, and receive messages from, the AI model we set up. The steps for you to try are as follows:

  1. Create a chatbot model for the chats between the user and the chatbot.
  2. Create an ai.js file in the lib folder, setup genkit here and write simple functions that can take the user input in text and return the ai output text.
  3. Create endpoints to getChatbotMessages and sendChatBotMessage.
  4. Create ChatbotContainer and ChatbotMessageInput components on the frontend, just like we have for normal users. (This is where the user will interact with the chatbot by calling the chatbot get- and send-message APIs.)

You can make your solution as simple or as complex as you choose. My own solution can be found on the “final” branch of the starter repo, or via this link (Chatty Final).

Feel free to reach out and tag me to look through your solution, either on my X or in the comment section.

If you’ve read this far, thank you very much for your time and attention. I hope you learned something today.
