
Dominating 2024: My Unfiltered Take on Google's AI Game Plan

Discover How Google's AI Gemini is Revolutionizing the Future: Unlock Multimodal Learning, Boost Your Productivity with AI Agents, and Transform User Experiences Beyond Imagination. Dive in to see how you can leverage these cutting-edge advancements for massive growth and success!
Meir Sabag
45 min read

Earlier, I shared my full report from Google's big event. Now I want to give you my interpretation in a more informal, conversational way: what was presented, how it can help us, what to expect, possible uses, and more…
 

I don't know if it's just me, but for me, this event felt like walking into a candy store (even though I'm in a cutting phase).
 

It's crucial that everyone reads my interpretation, draws inspiration and strategies from it, and implements them immediately, because the number of opportunities is simply insane!!!

So, without further ado, let’s dive in….

Gemini's Capabilities

For those who aren't familiar yet, Gemini is a language model from Google. It's essentially Google's version of GPT.

It's very important that you try out various language models. Each one contains billions of parameters, and while the operational mechanism of all of them is very similar [I'll write about this in the future], the training methods, the data, and especially the way each model is given a "thinking style" and reasoning habits make a big difference.

Since we're here not to judge but to see how things can work to our advantage, I say, let's use everything while leveraging the advantages of each language model.

This is also why, in the architectures of my AI agents [I promise to show you this in the future - just wait, we just met], I actually create agents from different models.

In my opinion, Gemini is a model that is mainly freer in terms of its limitations. It's a bit less conservative and is very, very good at retrieval and reasoning from the context we provide.

Maybe it's because it comes from Google—a company whose DNA is searching and ranking information. Now, to put things in the right context—our Gemini has a context window of 2 million tokens.

What does this mean in practice? It means you can inject a specific knowledge base of up to roughly 1.5 million words into a single conversation and then talk to the model about it.

And friends, that's a lot! Imagine being able to include the entire Harry Potter series (and still have room left), asking the model about one very specific event, and having it simply answer, grounded in the actual text rather than hallucinating.

When you combine this with Gemini's conversational abilities, it becomes a serious tool for the challenges we all face daily.

The Three Core Capabilities of Gemini as Announced at the Conference

1. Beyond Text-Based Interactions

Gemini has received new input capabilities based on images, audio, and video. What does this mean in practice? It means we can speak to the model and also feed it video to convey information that is hard for us to describe in words.

Remember we talked about Gemini's immense context capability? This is exactly where it comes into play. We can upload a video up to two hours long, and all of it remains available to the model for whatever task or analysis we point it at.

Let’s give a very complex example: Suppose I want to code a script from scratch.

The task is:

We want our Gemini to be our assistant and to follow everything we do so that in due course, it can take control.

Think of it as a navigator we're about to tell: "Listen, navigator, you're about to get the plane's keys and become the sole pilot."

Before we dive into the example, I want you to notice the complexity of the task [I'm really excited about this] and that there is no magic prompt that can solve this task. Instead, it's about using language model operational principles.

So, what are we going to do?

We put on headphones connected to our laptop and open software that records everything we do on the screen.

Then, we go out for an hour and a half of coding, preferably talking aloud about our inner thoughts.

What have we essentially done?

We activated several principles such as:

  • Creating our specific knowledge base.

  • Creating few-shot examples.

  • Activating the model's subconscious learning.
     

I talk a lot about these three principles in my product [insert link here]

But if I need to simplify it, I would say it's about teaching the model through doing, not explaining—just like humans.

In this way, the language model learns to direct the attention of its billions of parameters toward contexts we have no direct control over or knowledge of, yet which undoubtedly affect the final result.

It's important to note that I didn't say we recalibrate the parameters themselves, because that's fine-tuning, which is an art in itself with significant advantages that we won't go into here.
 

Okay…So what can we do after this hour and a half, and how does it relate to our task?

After our Gemini watched and learned our process, we can now take it in many directions, such as:

  • Getting feedback on our work methods.

  • How to streamline our work method.

  • Asking it to learn this and store it in its memory.

  • Asking it to find and fix bugs (after we give it the code itself).

  • Asking it for new edge cases we hadn't thought of.

  • Even taking it in the direction of programming new features.

  • And so many more things….
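To make the idea concrete, here is a minimal sketch of how such a recorded session could be handed to Gemini through the google-generativeai Python SDK. The file name, the prompt, and the waiting loop are my own illustration of the principle, not something Google demonstrated:

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Google AI Studio API key

# Upload the screen recording of our coding session (hypothetical file name).
session = genai.upload_file(path="coding_session.mp4")

# Video files are processed asynchronously; wait until the upload is ready.
while session.state.name == "PROCESSING":
    time.sleep(10)
    session = genai.get_file(session.name)

model = genai.GenerativeModel("gemini-1.5-pro")

# Ask the model to learn from what it "watched" and react to our workflow.
response = model.generate_content([
    session,
    "You just watched an hour and a half of me coding while thinking aloud. "
    "Summarize my working method, point out inefficiencies, and list edge cases I missed.",
])
print(response.text)
```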
     

My goal is not just to describe this to you; it's important to me that you understand the operational principles of a language model.

In principle, one could say that we can explore all these directions with the language model, even if we only provide it with the code.

And I agree that we'll get nice results here too.

But remember!! We are here to take the language model beyond the peak where most settle for easy work with as few prompts as possible.

When we know how to feed the language model with visual and auditory senses (memory and reasoning methods can also be combined, and then the sky's the limit), we cultivate its "subconscious" area.

At that moment, not only can the result improve, but mainly its reasoning ability.

This is crucial in our communication with the model: it helps reduce hallucinations, but mainly it creates high personalization with a lot of creativity.

This is why I'm against this whole magic prompt story—a single (or two) pre-made prompt that supposedly promises to give the desired output.

We aim to push every language model beyond its peak and squeeze every bit out of it while creating optimal personalization for us.

Therefore, it requires forgetting about the magic prompt and knowing how to combine the principles that make up the correct communication with a language model.

Everything here is still very early, and those who learn to use it before everyone else gain a significant advantage in their business and their growth.

The bonus is that this field is advancing at a dizzying pace [I can tell you that I have a stack of content written just two months ago that I already consider irrelevant], and the more we master these principles, the more easily, and more deeply, we can adopt each new innovation.

Because our unfair advantage is that the principles of using a language model do not change; their impact just becomes more refined.

Use-Case: "Ask Photos"

Based on the capabilities we described earlier, one of the applications Google is integrating is the ability to "talk to your photo album." This isn't just search based on tagging; it's retrieval based on understanding (and that's the big story of a language model).

For example: "When did Lucia learn to swim?" or "Show me how Lucia's swimming has progressed."

As I mentioned earlier, one of the most important principles in interacting with a language model is the ability to shape the conversation environment with as many information vectors as possible. Google integrates this perfectly into this feature because each photo contains metadata composed of many data points such as date, location, body language in the picture, and more…

All this together connects several layers of information for each photo. My educated guess, since Google mentioned their AI agents, is that the algorithm behind it is a whole web of AI agents created dynamically, where each team of agents manages a group of photos belonging to a specific narrative.

Since the cornerstone of the technology enabling all this goodness is vector proximity (each word is represented as a vector of numbers in a very high-dimensional space, and related meanings land close to each other), we get a context-based result.

I'll stop here with the algorithmic analysis of this feature but just say that the point here is not to search for tagged words but simply to talk freely—just like we talk to ourselves to bring back memories from the past.

That's how this feature works.
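As an illustration of that vector-proximity idea (and only an illustration; Google has not published how Ask Photos works internally), here is a tiny sketch of retrieving photos by meaning rather than by tags, using the embedding model available in the same google-generativeai SDK. The photo captions and metadata are made up:

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical photo descriptions; in a real system these would come from
# image understanding plus metadata such as date and location.
photos = [
    "2019-06-02, backyard pool: Lucia wearing floaties, holding the edge",
    "2021-07-15, community pool: Lucia swimming freestyle on her own",
    "2020-08-30, beach: family picnic, Lucia building a sandcastle",
]

def embed(texts):
    # Each text becomes a vector; texts with similar meaning end up close together.
    result = genai.embed_content(model="models/text-embedding-004", content=texts)
    return np.array(result["embedding"])

photo_vecs = embed(photos)
query_vec = embed(["When did Lucia learn to swim?"])[0]

# Cosine similarity: the photo whose description is closest in meaning wins.
scores = photo_vecs @ query_vec / (
    np.linalg.norm(photo_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(photos[int(np.argmax(scores))])
```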

It's important to me that you notice: see what happens when you take an object (in this case, a photo), connect it to metadata, feed this machine with high-octane fuel in the form of a few prompt engineering principles, and we get a turbo machine with crazy value that would have been considered very complex until about a year and a half ago.

NotebookLM: From Passive Learning to Dynamic Conversations

I want to describe to you this beautiful thing that, unfortunately, is only available in the USA.

In fact, it's so beautiful and efficient that I implemented it for myself in a very private and non-scalable way. Now that I think about it… maybe someday I'll sit down, polish it, and give it to the community for free.

**Anyway… we're talking about NotebookLM—here's the link: https://notebooklm.google/**

NotebookLM is a tool that stores the various pieces of information and content you upload, and through Gemini's capabilities lets you interact with that information.

You can upload pieces of content such as notes, lecture slides, or documents and ask questions about them.

In fact, the more creative part is the ability to synthesize new information and insights from the existing material using Gemini, while guiding the model through conversation.

Here too, we see principles of conversation design and prompts that are clearly not based on the magic prompt.

The principles here are based on:

  • Injecting personal knowledge base through content pieces we upload to the notebook.

  • Creating a flow of conversation to retrieve specific information or generate new insights based on existing information.
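NotebookLM itself is Google's product, but just to show how the two principles above could be imitated in a rough, unofficial way with the same SDK (the file name and prompts here are my own), something like:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Principle 1: inject a personal knowledge base (hypothetical lecture notes).
notes = open("lecture_notes.md", encoding="utf-8").read()

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="Answer only from the notes the user provides. "
                       "If the notes do not contain the answer, say so.",
)

# Principle 2: a flow of conversation over that material, not one magic prompt.
chat = model.start_chat()
chat.send_message(f"Here are my notes:\n\n{notes}")
print(chat.send_message("Summarize the three main arguments in my notes.").text)
print(chat.send_message("Now turn them into five exam-style questions.").text)
```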

2. Long Context

Our Gemini has the ability to build its answers based on up to 2 million tokens (about 1.5 million words) in the same conversation.

What does this mean in practical terms?

Imagine pouring all your Gmail emails into a conversation with Gemini. It could not only pull up a sentence from a specific email from five years ago, but also build on top of everything: drafting emails in your tone of voice, listing emails that are still open, surfacing emails that contain tasks for the coming days, and so on.

It's important to understand this core capability: it is essentially memory, working almost the way our own memory works.

And in general, it brings the technology very close to the retrieval level of our memory.

When the context window is limited, we need to activate advanced technological capabilities, creatively combining several APIs, just to approach these levels.

And even then, there may still be limitations due to the language model's constraints.

But now it seems the trend is moving towards making memory capabilities for language models very, very simple, so it generates personalized solutions up to the specific user level (that's me or you).

It's important to say that the price is still very high when we run a prompt with a context window of a million tokens (if I'm not mistaken, it's around $7 per query).

But it's clear that the price will drop (in the latest announcement, OpenAI cut the price by 50%), and capabilities will rise—it all depends on available hardware, and here we enter the story of NVIDIA, which we will leave aside for now.

To sum up this topic: creating memory in an accessible and simple way as part of the principles of how to talk to Gemini is very critical when we want a response that is not general but personalized. Gemini gives us plenty of memory, and from my experience, I can say the model works wonderfully with a huge amount of text.
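A practical habit that follows from this (my own suggestion, using the count_tokens call from the google-generativeai SDK; the export file is hypothetical) is to check how much of that huge window your material actually occupies before paying for the call:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical export of old emails we want to pour into the context window.
mail_archive = open("gmail_export.txt", encoding="utf-8").read()

# count_tokens reports how much of the (up to 2M-token) window this occupies,
# which is also what drives the cost of the call, before we send anything.
usage = model.count_tokens(mail_archive)
print(f"{usage.total_tokens:,} tokens")

if usage.total_tokens < 2_000_000:
    response = model.generate_content(
        [mail_archive, "List every email that contains a task I still owe someone."]
    )
    print(response.text)
```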

In the New Age of AI, the User Experience of the Search Engine Also Changes

I see more and more movement away from an experience based on presenting information and toward an experience based on delivering the final result.

Think about it for a moment…

We're no longer just looking for information; we want some hybrid of information that is specific to our needs. In fact, we're not even sure we want synthesized information anymore; we want to get a result.

The evolution of artificial intelligence is going to change the entire user experience as we know it, the entire way technology impacts us (with the same principles but in a different implementation form).

Google also responds to this.

They do it through their flagship product—the search engine.

This manifests in searches for information like: "Find the best Pilates studio in Boston, show me the details of the offer and the walking distance from my current location."

In other words, the change in experience happens already in the input process where we can afford to create a search composed of several needs simultaneously.

And it manifests in the experience of getting the final result: with Gemini in the loop, we no longer need to send the user off to the studio's website.

Instead, we take the relevant information and create something entirely new from it.

That is, the user experience changes not only in how we provide the input but also on the output side: we control a single unified interface as we see fit, and at the same time, oddly enough, it stays very dynamic, because that one UI component can absorb the vast variety of search configurations.

In other words, although the complexity of the solution increases, the implementation actually becomes simpler, and the value proposition still increases exponentially.

Looking Ahead and Empowering the Future from Google's Perspective

In the announcement, we saw that Google is not only addressing its existing products and services but also touching upon its additional growth engines.

This is very interesting to see, as it acts like a predictive oscillator reflecting the thoughts of Google's top executives and how they interpret trends, as manifested in their projects and innovation drivers.

So, let’s talk about it…

AI Infrastructure

Google has made a leap in the performance of model training capabilities.

On a personal level, I invest heavily in stocks of companies that have any connection to computing power. I believe computing power is already the new digital oil.

We see this in the following business movements:

  • There’s no need to talk about Nvidia; that’s already quite clear.

  • Meta has purchased a huge number of H100 GPUs for training its language model, Llama 3, which leads us to the next point.

  • Using an open language model - Open language models have critical advantages for tech companies and, in my view, for any business that wants to leap into the next decade and gain a crucial advantage. But this requires a lot of processing power both to run the model and to train it. This processing can be done on personal computers (we see Apple’s announcement with the M4 chip), or by purchasing on-demand processing power.

  • Apple is working on a compact language model that will operate on the iPhone's processing infrastructure. In my opinion, this is a super interesting trend as it will allow developers and entrepreneurs to create many interesting value propositions that leverage the advantages of local processing on the iPhone combined with external cloud processing. Either way, this is another interesting oscillator showing the importance of decentralizing the processing power of language models.

  • Fine-tuning a language model holds immense benefits for any business, both in costs and in the accuracy and speed of the responses. Fine-tuning is a kind of "patch" on top of an existing language model and, fortunately, it can be done with relatively low resources (and the dataset can be relatively small), but it still requires processing power.
     

I’ve only mentioned some of the business movements that broadly indicate a huge demand trend for computing power.

Google is firmly in this race with Trillium, its new generation of TPUs, thereby affirming what has already become quite clear.

3. AI Agents

Here we have reached the field that, in my opinion, is the most intriguing.

This is the field I live and breathe almost minute by minute. I'll touch on this briefly, and of course, in the future, I'll write extensively about this and demonstrate super cool examples of AI agents and how fascinating this technology is.

I have no doubt at all that this is the future.

Before we talk about what Google discussed at the conference, I'll briefly explain what an AI agent is.

An AI agent is essentially like GPT, but it knows how to take on a role that we define for it, hence it's sometimes called an assistant.

I'll give you a simple example: "GPT, from now on, you are a programmer specializing in writing in Python with 10 years of experience in object-oriented programming." At that moment, we essentially have an entity based on a language model that knows who and what it is.

Of course, in reality, it's much more complex, and there are dozens of lines of code defining the AI agent. I promise to show you a demonstration of this in the near future.

But what I explained to you is only half of it. Why only half?

Because we said an AI agent is like a language model (with behavior we've defined), so it needs to receive input, right?

Usually, we provide that input ourselves during a conversation.

But what happens if the input comes from another AI agent? That is, a loop is created where two AI agents talk to each other without our intervention.

Cool, right?!

And what happens if we create another five AI agents that all talk to each other? In fact, there's research showing how the more AI agents we create, the better the language model's capabilities, even though it's the same model.

That means we can take GPT-3.5 and bring it to a solution accuracy level like GPT-4.

I won't go deeper than this, but I hope you started to grasp and hold onto this concept called an AI agent.
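To make the concept tangible, here is a toy sketch of that loop: two role-defined agents built on the same model, passing messages back and forth with no human in between. The roles, the stopping rule, and the loop cap are my own invention, not Google's implementation:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def make_agent(role):
    # Each agent is the same underlying model with a different role definition.
    return genai.GenerativeModel("gemini-1.5-flash", system_instruction=role).start_chat()

coder = make_agent("You are a Python programmer with 10 years of object-oriented "
                   "experience. Write or revise code based on the reviewer's feedback.")
reviewer = make_agent("You are a strict code reviewer. Point out bugs and missing edge "
                      "cases in the code you receive. Reply 'APPROVED' when satisfied.")

message = "Write a Python function that validates an email address."
for _ in range(3):  # cap the loop so the agents don't talk forever
    code = coder.send_message(message).text
    feedback = reviewer.send_message(code).text
    if "APPROVED" in feedback:
        break
    message = f"Revise the code based on this review:\n{feedback}"

print(code)
```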

At the conference, Google talked about their AI agents with which they implement several applications in their workspace (as I'll discuss further).

Their innovation (besides the fact that this whole thing itself is new) is based on using proactive capabilities.

Proactive technology is a technology I'm currently writing the second version of.

This basically means the AI agent is diligent and doesn't wait to be told what to do; it knows how to initiate processes by itself.

I'll give you an example of my implementation:

Suppose I have an AI agent that knows how to analyze images on social media to generate metadata from them. In a completely different process, I create a campaign for a specific niche. Our AI agent knows to listen to the campaign I'm creating and, to refine it, automatically thinks to itself: "Maybe I should look for profiles of this target audience and see if I can make the campaign even more personal."

The moment it gets this trigger, it has the independent ability to live in the digital universe of humans and start surfing the internet to complete its tasks.

By the way, although I say AI agent, it's actually a team of 17 AI agents working together like an orchestra.
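My 17-agent orchestra obviously doesn't fit into a snippet, so here is a bare-bones sketch of just the proactive idea: an agent that watches for an event and decides on its own whether to start a task. The event source and the decision prompt are purely illustrative:

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

def new_campaign_events():
    # Placeholder event source; a real system would listen to a webhook or queue.
    yield {"niche": "home fitness", "copy": "Get fit in 15 minutes a day"}

for event in new_campaign_events():
    # The agent decides by itself whether acting would improve the campaign.
    decision = model.generate_content(
        "A new campaign was just created:\n"
        f"{event}\n"
        "Would researching audience profiles make this campaign more personal? "
        "Answer YES or NO, then explain in one sentence."
    ).text
    if decision.strip().upper().startswith("YES"):
        print("Agent kicks off audience research on its own:", decision)
    time.sleep(1)  # be gentle with the event source and the API
```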

So, Google's agents do this for users when they need to proactively search for an invoice in the inbox or pay the electricity bill.

In my interpretation, I see how an AI agent-based architecture is actually the next generation of solutions to the extent that it is the next evolution in programming.

In my opinion, existing programming languages will come to feel more primitive, and gradually the programmers who stand out will be the most creative ones.

I believe any solution based on a network of AI agents will be both more advanced and consume fewer programming resources, thus shortening the development cycle.

This is why, by the way, we see a crazy pace of innovation. Solutions I never imagined possible a year and a half ago are already becoming irrelevant today.

Workspace: Supercharging Productivity and Collaboration

We are all familiar with Google's workspace, which is broadly based on organizing our information: Gmail, Docs, etc.

We see the trend where large companies such as Apple, Google, and Amazon, along with operating systems like iOS and Android, integrate a language model engine to enhance the user experience.

In my opinion, this is a welcome but also obvious trend. What is interesting to see are the things the language model can do behind the scenes.

In my view, the real power of the language model is its ability to express far more while at the same time making the implementation of algorithms simpler.

In fact, the language model opens up the possibility to let our imagination run wild, and at the same time the algorithms become much simpler to implement. And not only that.

Until today, to solve problems using artificial intelligence, we needed a lot of data to create a learning curve for the model.

And even after we trained the model, it was often aimed only at a specific use-case or a relatively narrow range of cases.

With the language model, we can both let our imagination run wild but also have the ability to democratize the learning process by synthesizing the dataset.

Now, it's important to understand something, and it's very important that we all internalize this!

The almost sole advantage of startups over large companies is their creative ability and their ability to move quickly.

Startups with entrepreneurs who can’t sleep at night out of excitement are like a commando force that knows how to move quickly and lightly, not only adopting innovation but actually creating it.

When it comes to implementing algorithms with a short learning curve to train the language model in the context of algorithm expression, the relative and unfair advantage of startups over large companies even grows.

We can see, for example, the French company Mistral AI, which was founded in April 2023 and is already worth a lot. Its Mixtral model is essentially built from an architecture of smaller models that act as a kind of experts, with a routing model that knows in real time which expert to send the input to for processing.

Back to Google integrating its language model engine (Gemini): this integration essentially gives life to all the text we have sitting in Google's Workspace.

We get possibilities like:

  • Summarizing emails or documents.

  • Asking questions about your documents.

  • Building new insights.

  • Information analysis.

Android: Transforming Smartphones into Truly Intelligent Devices

The trend we saw in Google's Workspace continues in Android.

Gemini is embedded within the main and daily uses of the operating system, such as search and a much more personalized experience.

I estimate that Personalization 2.0 is going to be the main value proposition in every algorithm, user experience, or application in the coming years.

I think we'll see this even in fields like cybersecurity or fintech.

This is the main reason why I think for us—entrepreneurs, there is an insane opportunity to take the existing value propositions and turn them upside down.

This is exactly what Google is embedding in the operating system and essentially in every workspace. This is manifested in Android search engines, the personal assistant, functions like spam detection, and more.

I estimate that this is exactly what we'll see in Apple's announcement next month at its developer conference. This is even more true when we see Apple in advanced negotiations with OpenAI to tightly integrate the GPT language model.

Apple's announcement is going to be very intriguing, not because we'll see something Google hasn't already shown us, but because it will be interesting to see how they implement it down to the smallest details.

It is commonly assumed that Apple is not innovative, or that it is missing the AI train.

On this issue, I disagree with that thesis.

Apple's innovation does not stem from its originality but from the value it knows how to make accessible to its target audience.

Apple knows how to learn very well from others' experiences while applying its design strategy, and of course, the immense ecosystem combined with their brand knows how to do the job.

It will be interesting to see how Apple implements these principles.

It’s going to be a real inspiration for me because there are many unresolved pain points I’m debating, and I’m curious to see if Apple will give me inspiration for this next month.

All the announcements of the innovations in Google's operating system can be found in my report here. [insert link]

Research

As a direct continuation of the processing-infrastructure story, there is DeepMind, the home of Google's AI research, a rising power and, in my view, a leader in many fields of AI application.

We see this in areas that I find very interesting, such as:

1. AlphaFold: Revolutionizing Protein Folding

[[[[ Attach the video explaining what AlphaFold is - link: https://deepmind.google/technologies/alphafold/ ]]]]]

I will explain simply what it does so that we can later understand something bigger and more interesting.

The beautiful thing is that by understanding what AlphaFold does, we can understand something even bigger and broader, even though AlphaFold itself is already a huge thing in my opinion. This just shows what a fascinating era we are in - how many opportunities and abundance there are for anyone who wants to express themselves in the world, create, provide value, leave a mark, and improve their business.

I truly believe we are in the most fascinating period in human history.

So, let’s describe at the "micro" level and in a simple (and very superficial) way what problem AlphaFold solves and how it solves it using a language model.

The Problem:

Imagine we have a code with, say, 10 fields, where each field can take any value from 1 to 20. We also have a room with 100 safes, each with its own specific code. Any given combination can either open one specific safe in the room or open none at all.

We already understand that there are many possible combinations: in this case 20 to the power of 10, roughly ten trillion. How can we know the code for each safe? That is exactly the problem, and solving it requires some sort of trial and error over a huge number of combinations. In the analogy, the code is an amino acid chain (the 20 possible values correspond to the 20 standard amino acids) that folds into a specific protein important for the body's function, and a particular bodily function, say regulating blood pressure, is the safe.

The Solution:

AlphaFold is the algorithm that can identify which code opens each safe in the room. Since a sequence of numbers is essentially a kind of sentence, we can use a language-model-style approach, training it on all the code combinations we already know. The model learns by adjusting the weights of billions of parameters until it can predict the code for new safes far more efficiently. This is a tool that helps the "safe crackers" reach the desired code quickly, and thereby helps create more effective and safer drugs (for example around mRNA technology), and faster.

The problem-solution description presented here is very simplified, and although the contribution of this technology is immense for the world of research, medicine, and humanity (and all living beings on Earth), it is still just a specific point in something broader.

Because we can see in this solution approach how, if we make an analogy or conversion of a problem (in mathematics this is called a reduction) into the language world, we gain all the advantages of a language model. We solve the problem with the language model and then map the answer back to the original domain, which in this case is protein structure prediction.

Here we see the critical importance of knowing how to work with a language model using a handful of principles that repeat across dozens of cases.

We must not think there is a magic prompt that can solve a problem for everyone, but rather it is about applying several principles to a specific problem for a specific person seeking a specific solution from a specific perspective.

[[[[[[ Landing Page ]]]]]]] [[[[[ Important to note that this is an introductory price of $97 instead of $197 ]]]]]]

2. Robotics Technology

The robotics category has received a serious boost thanks to language models. Here too, I want to give my interpretation of the matter, but this is just the micro level of something broader that each of us can use.

The goal of robots is to know how to operate in the real world. As a result, there are parameters the robot needs to handle to mimic us humans.

Just think about the thousands of parameters related to moving various joints to perform one simple action - for example, a robot that knows how to be a dog (yes, let that sink in for a moment, we live in a crazy era).

[[[[ Insert a video of a robot dog here: https://deep-website.oss-cn-hangzhou.aliyuncs.com/video/Lite3.m4v ]]]]]]]]]]]

To achieve human-like imitation by robots, we simply need two main elements:

  • The robot's physical capability

  • The robot's cognitive ability to control its physicality in a real environment
     

To achieve this cognitive ability, we need the ability to teach the robot how to behave in the real world - just like a small child learning about the world.

This is a fascinating field, and I won’t delve into it here, but broadly speaking, it involves creating simulations for the robot. Through simulation, we allow the robot to gain life experience and cognitive abilities.

Here, the language model shows up in an incredible way, pushing the entire field tens of kilometers forward.

If we use the reduction principle I described earlier, we can create an analogy to the language world. To simplify, imagine you can tell each muscle exactly how to move to create a specific movement, and you do it very quickly. Simultaneously, imagine you’re walking in a world where every object knows how to talk to you.

This, in a very (very) general and abstract version, is a simulation of the real world. With this simulation, we can train the AI model, expressed through the language model, on how a robot would behave very similarly to humans, so much so that it could replace us in many real-world scenarios.

Imagine that one day your housekeeper is a robot that speaks like you, has human-like body language, and performs its tasks exactly the way we would.

In fact, this is already starting to happen - the future has arrived in the present.

The broader principle I want to expose you to is essentially simulations.

Simulations are a very important tool through which we can teach our language model how to live and operate in the real or digital world of humans.

We see this trend manifested in the following fields:

  • Investments

  • Venture capital funds

  • Startups

     

In my view, this can be a game-changer tool for business owners (both large and small) who are not necessarily tech-savvy. Here lies the great opportunity to create a game-changing weapon.

It’s like giving David a shotgun against Goliath, just before their battle 3000 years ago.

I want to show you again how creative use of a language model, according to those same principles, creates a tremendous technological leap forward in a very accessible way that anyone can start learning and adopting for their needs.
 

[[[[ Landing Page ]]]]]

A Glimpse into the Future from My Perspective

I wrote this article as a companion to the article summarizing the Google event [[[[ Insert link to the article ]]]]] so that we can see the trends from Google's perspective together with how I see things.

All the applications we saw at this event express a clear trend of integrating the language model in a way that it essentially generates reasoning according to the thought process of the person interacting with the machine.

I will explain this more precisely: on the surface, a language model is a probability machine that knows how to predict the next word based on the many words that precede it.

We don’t know on what basis it predicts the next word or the connections that led the model's neural network to a specific answer.

Every language model (which is essentially an AI that knows how to speak) is composed of many layers of thought processes. In fact, this is one of the main updates between, for example, GPT-3 and GPT-4.

I'll give an example: if we ask the language model to write code for a simple game like Snake, in Python, the model works through many layers of thought processes: how to write software in general and, in particular, how to write it according to the rules of the specific language chosen, in this case Python.

So, we see in it a realization of reasoning and thought processes much like humans.

It is composed of many layers that I personally study - imagine I’m like a psychologist of the language model.

Therefore, the core opportunity that the language model creates from my perspective is: the possibility of capturing human reasoning into a language model and retrieving it in real-time while performing various tasks from daily life in any field.

This is an infinite field in my opinion, as it is also composed of layers upon layers, with the top layer being your reasoning, your perspective.

In my opinion, it’s exactly like falling in love with a girl who is very easy to fall in love with (let’s say because she’s very beautiful), but only you know why you really fell in love with her. You see something very deep in her, and that’s what connects you to her.

This reasoning is what makes it a very special emotional connection that turns into a story unlike any other in the world, just like the two characters involved in it.

This connection can give birth to a family with a very special story. A shared book that only these two people can create, and so on...

This analogy is exactly what a language model can do for products.

Don’t fall into the thinking of "something like this already exists" or be tempted by obvious applications that are simple automation.

Instead, take advantage of this beautiful thing and take it to your places. To the thing that excites you and makes you feel free.

Then...

Then you will feel how technology is a part of you, not you a part of it.

[Section for my landing page] [Insert the USP you have in your notebook]
