"We become what we behold. We shape our tools, and thereafter our tools shape us." (Father John Culkin)
I used ChatGPT for the first time in December 2022, less than a week after its launch, which puts me somewhere between 1 million and 57 million users. I asked it for story ideas aimed at people living in Lagos. I was horrified by every single one, but a lot has changed since that first attempt.
Artificial intelligence was quietly shaping our lives long before we began talking to it. It decides which videos to show us, helps us retrieve photos, and even predicts the weather on our phones. Google Translate once helped me converse in French with a French speaker in Abidjan. This piece isn't really about those systems behind the scenes, but about the kind of AI I think and speak with. This is a story about how I use generative AI, and how it's changing the way I think, learn and navigate my daily life.
For centuries, we have built tools to extend our minds. We used symbols to store memories. Written language preserved ideas; spoken language spread them. Papers. Books. Libraries. Printing presses made ideas easy to duplicate. Then came computers, then search engines. Generative AI is a new frontier: tools that remix and respond.
I can only communicate with the model using language
As I spend more time with these models, I realize that language is not just a way to communicate with generative AI; it is often the key to domain expertise. Where I hear oontz, oontz coming from the speakers, a music head hears a four-on-the-floor kick pattern in a 128 bpm track with synth textures. You may read this article; a developer will see component trees, state management, API calls, frontend frameworks and clean architecture.
Each context has a language, from law all the way to cooking dinner. It's difficult to ask the right question if you don't know the language, and even harder to think clearly if you don't understand it. This became painfully obvious once I started working with generative AI. When I leave my comfort zone, like trying to describe a piece of music or understand urban design, my vocabulary limits me.
I use reverse engineering to bridge the language gap.
After watching Celine Song's Oscar-nominated film Past Lives, I couldn't stop thinking about "Quiet Eyes" by Sharon Van Etten, from the original soundtrack. I asked ChatGPT how it would describe the song. It called it "a haunting, emotional piece."
When I asked for more specifics, ChatGPT responded with “sonic ambience,” “meditative,” and “an emotional crescendo which lands in quiet reflection, not catharsis.”
Using those words, I asked another tool, MusicFX by Google Labs, to create music. It couldn't replicate the track because of copyright restrictions, but the description helped me better understand how tone and arrangement work together.
The entire loop, from asking ChatGPT to describe the song, to creating something with MusicFX, to learning through creation, took less than ten minutes. The same loop helps me think better when working in teams. I once needed to explain a feature to a product designer but didn't know the right words, so I collected screenshots of different websites and asked ChatGPT to help me describe what I wanted to say.
With the language in hand, I quickly built a rough prototype using Lovable. It was good enough for the designer and his team to work with, and they pushed it further than I could have.
I find that reverse-engineering works best when my domain language is thin and I only need enough to keep things moving, whether that's legal jargon or product-designer-speak. It's not about mastery; it's about functional fluency, and that's enough to make real progress.
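If you would rather script that loop than copy and paste between tools, here is a minimal sketch of its first half: asking a model to describe something in domain vocabulary you can reuse as a prompt elsewhere. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are illustrative, not the exact setup I used.

```python
# Minimal sketch: ask an LLM for domain vocabulary, then reuse it as a prompt elsewhere.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def describe_in_domain_language(thing: str, domain: str) -> str:
    """Return a description of `thing` written in the vocabulary of `domain`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"You are an expert in {domain}. Describe things in precise domain vocabulary."},
            {"role": "user",
             "content": f"Describe {thing} in terms of tone, texture and arrangement, "
                        "as a short prompt I could hand to a text-to-music tool."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    prompt_for_musicfx = describe_in_domain_language(
        "'Quiet Eyes' by Sharon Van Etten", "music production"
    )
    print(prompt_for_musicfx)  # paste the result into MusicFX or any other text-to-music tool
```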
The essence of creativity is simply connecting things. Generative AI excels at this
It makes meaning out of mismatches, taking things that don't usually go together and whipping them into something new. The creative mind juggles deep domain knowledge and a fresh perspective. What could be better than an automated system trained to surface connections across disciplines, contexts and vast amounts of knowledge?
LLMs are now part of my creative process. They act more as a sparring partner than an idea generator: I feed them half-thoughts and they stretch them, helping me find coherence. Here's an overview of how I use generative AI across different creative functions; a small scripted sketch of a couple of these prompt patterns follows the table.
| Creative function | How I use AI |
|---|---|
| Pattern recognition | Using ChatGPT, I identified patterns across dozens of Independence Day speeches given in Nigeria since 1960. |
| | I regularly combine research and meeting transcripts to build more coherent memos around ideas. |
| Curiosity surfacing | I often ask ChatGPT questions such as: What am I missing here? What unexpected angles should I explore? Given the outcome I am aiming for, what gaps are in my thinking? |
| Perspective shifting | For one project, I created an advisory board of experts from multiple disciplines to continually probe my thinking, from product thinking through to organisational strategy. |
| Counterarguments | I once asked the model to critique a presentation I was working on, beyond my own optimism, as if it were a sceptical financier. |
| Scenario testing | I simulate possible outcomes for different choices. |
| Language play | I play with language to test different taglines, calls to action and headlines. |
| Structure building | Because I spend so much time editing, I use ChatGPT for scenarios where I revisit the same type of decision repeatedly. |
| Abstraction | |
| Mood mapping | To gauge tone, I test different variations of a communication. Harsh? Warm? Etc. |
| Analogy expansion | I needed analogies to illustrate a point the story was making; Claude helped me with this. |
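Two of the rows above, perspective shifting and counterarguments, are really the same pattern: run the same draft past different personas and collect the pushback. Here is a rough sketch of that pattern, again assuming the OpenAI Python SDK; the personas, model name and input file are placeholders, not my actual setup.

```python
# Sketch of an "advisory board": one draft critiqued from several expert perspectives.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; personas, model and file name are illustrative.
from openai import OpenAI

client = OpenAI()

ADVISORY_BOARD = [
    "a sceptical financier looking for weak numbers",
    "a product thinker focused on user value",
    "an organisational strategist worried about execution",
]

def critique(draft: str, persona: str) -> str:
    """Ask the model to push back on `draft` while playing `persona`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"You are {persona}. Push back hard; do not flatter."},
            {"role": "user", "content": f"Critique this draft and list what I am missing:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = open("presentation_notes.txt").read()  # hypothetical draft file
    for persona in ADVISORY_BOARD:
        print(f"--- {persona} ---")
        print(critique(draft, persona))
```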
It's not about offloading thinking, but about thinking bigger and testing deeper in the messy process of creating something new.
(I try to) keep my models in check
I spend a lot of time studying how models are built, how they can be improved and, most importantly, where they fail. The more I know about their limitations, the more I can trust them for what they're good at, without blindly trusting them where they fall short.
ChatGPT has become my daily driver, but I have learned not to rely on it when I need to verify real-world events. Even though it hallucinates less than it used to, I still feel safer not trusting it completely. Perplexity, by contrast, behaves more like a search engine with an LLM layer: it gives me links to pages I can click on and read.
Sometimes I need to work through material in an enclosed environment, where I can tell the difference between the LLM's knowledge and the knowledge I feed it. Google's NotebookLM is a great tool for this: it is purpose-built to use only the sources you provide. Where ChatGPT draws on its global understanding, NotebookLM operates solely on the information you give it.
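NotebookLM does the enclosing for you, but the underlying idea, answering only from the sources you supply, can be approximated with any chat model. This is a rough sketch of that idea, not of how NotebookLM itself works; the file names and model are assumptions.

```python
# Rough approximation of source-grounded answering: the model is told to use ONLY the
# supplied documents and to say so when the answer isn't in them. This mimics the idea
# behind NotebookLM, not its actual implementation. File names are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def answer_from_sources(question: str, source_paths: list[str]) -> str:
    """Answer `question` using only the text of the files in `source_paths`."""
    sources = "\n\n".join(f"[{p}]\n{Path(p).read_text()}" for p in source_paths)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the sources below. If the answer is not in them, "
                        "reply exactly: 'Not in my sources.'\n\n" + sources},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_from_sources("What did the interviewee say about pricing?",
                              ["interview_transcript.txt", "meeting_notes.txt"]))
```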
The leash isn't about limiting the tools. It keeps me, the decision maker, informed at every step of the process.
Through debate, I refine
Writing is thinking. When I want to make sense of what I'm thinking, I write. It helps me see what I actually think, where the gaps are, and how everything fits together. It's for this reason that I scrutinize every first draft: I stretch, interrogate and refine it until it is rigorous. I treat AI the same way. I don't expect great results the first time. When people complain that their LLMs don't help, I often ask: are they judging the response to their very first prompt?
Debate is my process. I disagree with the model's suggestions and ask it to push back on mine. That tension is where the thinking happens.
My process differs depending on whether I'm documenting an idea or a process, as in non-fiction and journalism, or doing creative writing. For the former, my goal is to make sure all the parts connect neatly. With creative writing, where I'm trying to evoke or expand feelings, I don't yet trust the process enough to collaborate deeply.
Context is my swing vote.
Ten years ago, the only way to see a reflection of yourself on the internet was to take a quiz called "What Kind of Bread Are You?". That has changed. The model is generic at first, but you gradually bring it into your life; it builds context by asking questions until you can say, "Based on all you know about me, tell me x." ChatGPT is my daily driver.
I'm not sure why I would use anything else. I test new models every two weeks or so, but always return to ChatGPT. It is so familiar with my wife's tastes that I can send it a photo and it will make an accurate takeout recommendation. It knows what's in my fridge because it has seen it. When I can't decide, it can pick the shirt that best fits my body type.
It has become, in a way that is sometimes unnerving, a living memory. A subtle but powerful presence. An extended mind. The creepiest manifestation came a few months ago, when I asked ChatGPT about Ogboni, the traditional religious and social fraternity of the Yoruba. ChatGPT answered in detail, even listing the titles within Ogboni. I already knew all of this; I sometimes ask ChatGPT questions I know the answers to, just to see if it gets them right.
Then I asked, "Based on all you know about me, what would my title be if I were one of the Ogboni?" It said Apena. It was a cool response. It was also my great-great-grandfather's title, and I have never shared that piece of family history with ChatGPT.
Ambient layer
Many technologies that became useful started out as toys. We once went to cybercafés to access the internet; now even our accessories are connected. You used to go to a specific website or page to make an online payment; now you simply pay at checkout.
In the Google Doc where this article was written, Gemini's four-pointed star waits to assist, while Grammarly catches my spelling mistakes without being asked. In my neighbourhood, I'm sometimes seen taking walks while talking into my phone, making up ridiculous scenarios with ChatGPT, because transcription makes low-friction, high-information capture possible.
It will then become proactive.
Certain tools are already more proactive than others. Grammarly, an AI grammar tool, already flags errors in every text box in my browser, and the next step doesn't seem far off. My Fitbit can tell my journaling application what my activity levels are; one day my journal may generate insights such as, "You tend to have better moods when you've crossed 8,000 steps the previous day and had a restful night."
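That kind of insight is less magical than it sounds; it's a small correlation across two data streams. Here is a toy sketch with invented numbers, assuming the journal stores a daily mood score and the tracker exports daily step counts.

```python
# Toy sketch: is mood higher on days after crossing 8,000 steps? The numbers are invented;
# a real version would read exports from the fitness tracker and the journaling app.
from datetime import date, timedelta

steps = {"2024-05-01": 9500, "2024-05-02": 4200, "2024-05-03": 11000, "2024-05-04": 6100}
mood = {"2024-05-02": 8, "2024-05-03": 5, "2024-05-04": 9, "2024-05-05": 6}  # 1-10 self-rating

def next_day(d: str) -> str:
    """Return the ISO date string for the day after `d`."""
    return (date.fromisoformat(d) + timedelta(days=1)).isoformat()

after_active, after_rest = [], []
for day, count in steps.items():
    m = mood.get(next_day(day))
    if m is None:
        continue
    (after_active if count >= 8000 else after_rest).append(m)

if after_active and after_rest:
    print(f"Average mood after 8,000+ step days: {sum(after_active) / len(after_active):.1f}")
    print(f"Average mood after quieter days:     {sum(after_rest) / len(after_rest):.1f}")
```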
Things will shift even more. My AI will no longer be something I use but something that moves along with me, learning my rhythms and patterns by observing them. The more seamless the interface becomes, the less I will notice it.
Thousands of years ago it was pen on paper. Then came recorders. Then computers. Now it feels as if all of these shifts are happening at once. This time, the tool learns from you, listens to you and, most importantly, thinks with you.