Exploring Lamatic: How to create a RAG chatbot in 15 minutes

09.04.2025

The world of software development is constantly evolving, with new tools emerging to accelerate workflows and lower the barrier to entry. One such solution is Lamatic — a GenAI platform that makes it easy to integrate large language models into your applications. In this article, we’ll explore the recent updates introduced in the latest Lamatic release and walk through a mini-project built using the platform and its SDK.

My name is Sophia, and I work at Innova, bridging the gap between backend and data science. I’ve never really used low-code tools before; I’m more comfortable with traditional programming in Python and Node.js. That’s exactly why I was curious to try out this new approach: to better understand both its advantages and limitations. This article doesn’t aim to take a side in the ongoing debates around low-code; instead, I want to explore one more way of working with AI technologies.

Among developers, opinions on low-code platforms are often polarized. Some see them as great for prototyping, while others dismiss them as clunky and inefficient. So, we decided to test whether a low-code platform could actually simplify and speed up building a proof of concept (POC) involving AI.

What’s New in Lamatic 2.0?

Lamatic recently launched a major update. It’s a managed platform-as-a-service with a low-code visual builder, an integrated vector database, and seamless connections to external apps and models. The platform targets a few key user groups:

  • Developers (for rapid development and deployment of AI agents with minimal code);
  • Analysts (to integrate ML models for data analysis);
  • Business users (to automate workflows and improve customer experience).

Highlights from the Lamatic 2.0 release include:

  • Agentic Flows (a new way to build AI agents through a visual flowchart interface);
  • Serverless Deployments (deploy agents to a serverless infrastructure with built-in caching);
  • Developer SDK (auto-generated GraphQL API and SDKs to simplify integration);
  • Vector Memory & Context (an optimized vector database built on Weaviate);
  • Monitoring (detailed logs, traces, and reports to help improve your workflows).

We’ll use Lamatic to build a chatbot that generates workout plans based on user preferences. First, we’ll create a conceptual flow of the app. This project will let us try out integration with OpenAI, interaction with Weaviate, and data import from Google Sheets. In the second part of the article, we’ll switch to using the SDK and Telegram to expand the project.

Let’s Build Something: A simple workout chatbot

We want to build an LLM-powered chatbot that creates personalized workout plans based on user input. Our list of exercises lives in an Excel file, which we’ll import into the system via Google Sheets. The pipeline looks like this:

  1. Load training data from a spreadsheet into a vector DB (Weaviate);
  2. User starts a conversation with the chatbot;
  3. The chatbot asks for goals, fitness level, available equipment, health restrictions, and desired workout frequency;
  4. Once the user provides input, a request is sent to OpenAI to generate a response;
  5. The workout plan is returned to the user.

How can the project be implemented through the platform?

In Lamatic, flows are sequences of blocks executed one after another. You can create a flow from scratch or use a built-in template.

For our file import, we’ll use the vectorization template. To import the contents of a file from Google Sheets, simply fill in three fields. After connecting a Google account and providing a public link to the spreadsheet, we select the sheet to import.

In the Row Chunking block, we define the logic for converting each row to a string:

// Convert a spreadsheet row object into a single "key: value, key: value" string
function objectToString(obj) {
  return Object.entries(obj)
    .map(([key, value]) => `${key}: ${value}`)
    .join(", ");
}

// {{ triggerNode_1.output }} is the current row coming from the Google Sheets trigger
output = [objectToString({{ triggerNode_1.output }})]
return output
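For example, assuming a row with the columns we use later in the metadata step (the sample values here are hypothetical), the chunking code turns it into a single string:

// Hypothetical row coming from the spreadsheet trigger
const row = {
  "Exercise": "Push-up",
  "Required Equipment": "None",
  "Fitness Level": "Beginner",
  "Description": "Bodyweight exercise for the chest and triceps"
};

objectToString(row);
// => "Exercise: Push-up, Required Equipment: None, Fitness Level: Beginner, Description: Bodyweight exercise for the chest and triceps"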

We make sure to pass the right variable into the code. Then, we add the OpenAI API key in the vectorization block, which is all we need to connect the LLM.

Before saving data into the vector DB, we define a transformation algorithm in the Transform Metadata block:

// Vectors produced by the vectorization block
let vectors = {{vectorizeNode_193.output.vectors}};
let metadataProps = [];

// Store the chunked row text as the "content" field
let metadata = {}
metadata["content"] = {{ codeNode_582.output }}[0]

// Copy the relevant spreadsheet columns into metadata
const row = {{ triggerNode_1.output }}
metadata['exercise'] = row['Exercise']
metadata['equipment'] = row['Required Equipment']
metadata['fitness_level'] = row['Fitness Level']
metadata['description'] = row['Description']

metadataProps.push(metadata)

console.log("finaldata:", {"metadata": metadataProps, "vectors": vectors});
output = {"metadata": metadataProps, "vectors": vectors}
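If everything is wired up correctly, the object passed to the vector store should look roughly like this (values shortened for illustration):

{
  "metadata": [
    {
      "content": "Exercise: Push-up, Required Equipment: None, ...",
      "exercise": "Push-up",
      "equipment": "None",
      "fitness_level": "Beginner",
      "description": "Bodyweight exercise for the chest and triceps"
    }
  ],
  "vectors": [[0.0123, -0.0456, ...]]
}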

Once deployed, the data imports almost immediately. We can check logs and the Context tab to verify that our training data is stored correctly.

[Screenshots: the deployment logs and the Context tab with the imported data]

Now that the data has been prepared, we can move on to the main part of the project. Conceptually, we could use the following algorithm:

1. To collect information from the user, we’ll use Lamatic’s built-in chat widget. This should be sufficient for our mini-project.

2. Next, there is a fork in the path: the bot needs to determine whether the user has already provided the necessary information.

3. If the information has been provided, we can proceed straight to preparing the workout plan.

4. If not, we need to ask the user some follow-up questions.

To implement this branching, we use a special block called a “Condition”. It allows us to check whether certain conditions are met before taking an action. In our case, we ask the LLM to check whether all the necessary information is already present in the chat history and to return 1 if it is, or 0 otherwise.
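A possible prompt for this check might look like the following (my own wording, not a built-in template):

Analyze the chat history. If it already contains the user's goal, fitness level,
available equipment, health restrictions, and desired workout frequency, return 1.
Otherwise, return 0. Return only the number, without any extra text.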

It’s crucial for our project that the model only uses workouts from our Excel file. To achieve this, we can use a technique called RAG (Retrieval-Augmented Generation). This technique allows LLMs to enhance the quality of their responses by incorporating up-to-date information from external sources like databases, documents, and APIs.

Let’s set up the “RAG” block. To begin with, we will need a prompt:

Create a personalized training plan following this structure: warm-up, upper body exercises, lower body exercises, stretching, and cool-down.

Make sure to tailor the plan to the user’s preferences, fitness level, and goals.

Provide the response in plain text only, without any formatting symbols like ** or ###.

Now, let’s not forget about the other settings: choose the model, select embeddings, use our vector database as the input data, and use the last message in the chat as the search query.

Once complete, our flow looks clean and straightforward.

Let’s put this into practice. Imagine I am a beginner who wants to train twice a week, but I don’t have any equipment.

The chatbot asks questions and generates a tailored plan. One limitation of the chat widget is that it doesn’t support rich formatting, so responses can appear as a long block of text. For short replies, that’s fine — but for detailed plans, we need a better client interface.

Despite this, our flow behaves as expected. The bot gathers info and then generates a plan based on the inputs. However, our solution does have a few drawbacks. For instance:

1. The number of requests to OpenAI: In just one run, we make two requests to the model and one request for embeddings when using RAG. If our application has a large number of users, the costs could be quite high.

2. The chat widget does not support text formatting, making it difficult to read the response from the model.

3. There is some level of unpredictability, as we cannot guarantee the model will always accurately analyze the dialogue history and produce the desired result. If a user asks for different workouts within a single session, the context may become mixed.

Fortunately, there is a solution we can implement fairly quickly: we can combine our Lamatic flow with a popular messenger to minimize the impact of these drawbacks.

Project implementation using the SDK

The Lamatic SDK provides functionality for launching flows or agents, which is especially useful when integrating AI scenarios into existing systems. For example, let’s say we already have a solid codebase implementing complex logic. If we want to quickly and easily add some AI functionality to it, we can create a Flow in Lamatic and then trigger its execution directly from the necessary part of our existing code.

How are the SDK and the UI-based flow connected?

One of the key aspects of working with the SDK is that we can trigger a flow from an external source using an API call. In our case, we’ll move all the logic related to gathering the user’s preferences and constraints outside the flow, while the AI part — extracting data and generating a response — will be implemented in Lamatic. Schematically, it will look like this:

We’ll use a Telegram chatbot as the user interface. The bot will collect user input about their preferences, pass it to our backend, and from there, we’ll call the Lamatic Flow.

Currently, the SDK is available only for JavaScript. The documentation includes examples for Next.js and React, but there’s nothing stopping you from using the SDK with any other framework. To initialize the client, you only need the Lamatic project ID and an API key. To generate the API key, open the Lamatic interface and navigate to Settings > Authentication > API keys.

// The Lamatic client class comes from the Lamatic SDK package (see the docs for the exact import)
const lamaticClient = new Lamatic({
    endpoint: "https://inference-api.lamatic.tech/graphql",
    projectId: process.env.LAMATIC_PROJECT_ID,
    apiKey: process.env.LAMATIC_API_KEY,
});
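Both values can live in environment variables, for example in a local .env file (placeholder values shown):

LAMATIC_PROJECT_ID=your-project-id
LAMATIC_API_KEY=your-api-key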

To trigger a Flow in Lamatic, we’ll use the executeFlow method:

// flowId is the ID of the flow we created in the Lamatic UI
const sendToLamatic = async (user_text) => {
    const payload = {
        "user_text": user_text
    };

    try {
        const response = await lamaticClient.executeFlow(flowId, payload);
        console.log("Lamatic response:", JSON.stringify(response));
        return response;
    } catch (error) {
        console.error("Error while sending the request to Lamatic:", error);
    }
};
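Calling the helper is then a one-liner; the user_text key matches the field we’ll define in the flow’s API request block below:

const response = await sendToLamatic("Goal: lose weight. Equipment: none. Fitness level: beginner.");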

Now all that’s left is to implement the logic for Telegram. To integrate with the messenger via API, we’ll use the Telegraf library for Node.js. We create a bot through the Telegram interface, obtain an API token — and voilà, everything is ready!

We’ll ask the user a series of questions needed to generate a workout plan. I’ll implement the simplest and most straightforward version:

// Assumed setup: a Telegraf bot instance and in-memory state for answers
// (BOT_TOKEN is a placeholder env var for the token obtained from Telegram)
const { Telegraf } = require("telegraf");
const bot = new Telegraf(process.env.BOT_TOKEN);
const userAnswers = new Map();

const askGoal = (ctx) => {
    ctx.reply("What is the purpose of training? (for example, losing weight, gaining muscle mass, or just for health)");
    userAnswers.set(ctx.from.id, { step: 1, data: [] });
};

bot.start((ctx) => {
    ctx.reply("Welcome! Let's find you a personal training plan.");
    askGoal(ctx);
});

bot.on("text", async (ctx) => {
    const userId = ctx.from.id;
    const userState = userAnswers.get(userId);

    if (!userState) {
        askGoal(ctx);
        return;
    }

    userState.data.push(ctx.message.text);

    switch (userState.step) {
        case 1:
            ctx.reply("What kind of equipment do you have? (for example, dumbbells, barbell, or nothing at all)");
            userState.step = 2;
            break;
        case 2:
            ctx.reply("What is your fitness level: beginner, intermediate, or professional?");
            userState.step = 3;
            break;
        case 3:
            ctx.reply("Do you have any health restrictions?");
            userState.step = 4;
            break;
        case 4:
            ctx.reply("How many times a week do you want to train?");
            userState.step = 5;
            break;
        case 5:
            const [goal, equipment, level, restrictions, times] = userState.data;
            const user_text = `Goal: ${goal}.\n Equipment: ${equipment}.\n Fitness level: ${level}.\n Health restrictions: ${restrictions}.\n How many times a week user wants to train: ${times}`;

            ctx.reply(`Thanks! Here are your answers:\n\n${user_text}\n\nPlease wait...`);
            userAnswers.delete(userId);

Now we just need to send our message to the Lamatic Flow, get the response, and parse it:

const lamaticResponse = await sendToLamatic(user_text);
if (lamaticResponse) {
    // The generated plan comes back under result.resp.resp, as configured in our API response block
    const rawText = lamaticResponse.result.resp.resp;
    await ctx.reply(rawText);
}

Let’s take a look at how the flow changes when we move some logic outside of it:

We can see that the conditional branching is gone, and only three blocks remain: the API request handler, the RAG block, and the API response block. This also reduces the number of paid OpenAI calls. Setting up the block to receive the request is very straightforward:

{
  "user_text": {}
}
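With this schema in place, the payload sent by executeFlow simply carries the collected answers, for example:

{
  "user_text": "Goal: lose weight. Equipment: none. Fitness level: beginner. Health restrictions: none. How many times a week user wants to train: 2"
}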

Let’s get back to Telegram. Since the messenger supports message formatting, the final response looks much cleaner. Let’s run our bot using the /start command and see how the algorithm performs:

Great! Everything works as expected. Now let’s think about…

How can we improve the project?

  • Input validation. At this stage, the user can theoretically send the chatbot anything.
  • Expanding the database of exercises. Right now, we’re relying on just one table; if we want the bot to generate personalized training plans even under the strictest constraints, we’ll need a much larger set of exercises.
  • Adding rate limiting. To avoid abuse, we should protect the bot from being spammed with too many requests or from accidentally draining our OpenAI budget; a minimal sketch follows this list.
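To illustrate the rate-limiting idea, here is a minimal in-memory per-user throttle written as Telegraf middleware (my own sketch: the 3-second window is an arbitrary choice, and a production bot would want persistent storage and finer-grained limits):

// Allow at most one message per user every WINDOW_MS milliseconds
const WINDOW_MS = 3000;
const lastSeen = new Map();

bot.use((ctx, next) => {
    const userId = ctx.from?.id;
    if (!userId) return next();

    const now = Date.now();
    if (now - (lastSeen.get(userId) ?? 0) < WINDOW_MS) {
        return ctx.reply("Too many requests, please slow down.");
    }
    lastSeen.set(userId, now);
    return next();
});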

Conclusions and impressions

Low-code platforms are a great tool for rapidly building POCs. Lamatic’s advantage lies in how quickly and intuitively you can assemble an AI system. Of course, such tools aren’t well suited for designing deeply branched or highly complex systems, but they’re great for building a part of one.

Personally, as a developer, I really appreciated how easy it is to integrate a Lamatic Flow into your code. A lot of things work out of the box — and that’s awesome. Using the low-code platform has been an interesting experience for me, and we will continue to keep an eye out for further releases.

Thanks for exploring this topic with me! Have you used low-code platforms before? Please share your impressions in the comments below.
