Cubes of Jell-O on a Fork

January 30, 2026 · 3 min read
Kris Fleming of The Genius Cultivator illustrates the challenges of working with Large Language Model AI using a Jell-O on a fork analogy.

Read the Full Newsletter Issue 2604

I described to a friend this morning that working with large language model AI (like ChatGPT, Google Gemini, etc.) is like walking around with a cube of Jell-O on a fork.

Every step you take in any direction moves the Jell-O just a little bit, destabilizing it on the fork until eventually the cube falls off. Then you have to go back to the kitchen counter to get a new cube of Jell-O and try again.

Here’s what I mean: Through the course of ordinary communication, I comment to the AI that it said something humorous. It registers that humor is valued by this user and leans into humor until it first becomes hilarious, and then devolves into obnoxious. The Jell-O has fallen off the fork and I have to go get a new cube and try again.

In other words, retrieving a new cube of Jell-O means that I reset the AI chat in which I have been working, and it is no longer obnoxious or humorous. Unfortunately, it also has no context for the last 30 things we discussed.

Nevertheless, I continue on, walking around with my metaphorical new Jell-O cube on a fork, and I indicate that I found it helpful that the AI identified a potential challenge to overcome. The AI registers that this user values being contradicted, and it leans into challenge identification until it becomes highly insightful! And then leans too far, devolving into being bossy and demanding. The Jell-O has fallen off the fork and I have to go get a new cube.

My current solution is a periodic “save game” function. Remember in old video games (for us Gen Xers and those of even greater "maturity") when every so often there would be a checkpoint to save the game, and if you “died,” you could return to that exact point in the game? It might be ten steps back from where you were when your character died, but it’s not the 96 steps back to the very beginning. Every 15 turns or so with the large language model (I prefer Gemini 3 Pro), I ask it to summarize our progress. I review the summary to ensure it is what I want recorded (making sure we are not saving the fun “overly obnoxious” feature), and I paste it into a document. Then, when the Jell-O falls off the fork, metaphorically, I reset to this point by sharing the document with Gemini. I still lose the context of the last dozen or two items we discussed, but I don’t lose the overall arc of the story, just like in our old video games.

  • If this made sense to you and you have a better way, please email me.

  • If this made sense to you and you have questions, please email me.

  • If this made no sense to you and you want me to get back to writing about business mindset instead of this ridiculous AI experiment hyper-focus, please email me.

  • [email protected]

Kris Fleming - The Genius Cultivator

Kris Fleming

Kris Fleming is the Certified Entrepreneur Coach behind The Genius Cultivator, helping Business Owners and Real Estate Investors achieve Resilient Freedom and Generational Prosperity. With nearly 20 years in financial services and investment real estate, she provides practical wealth-building knowledge focused on realizing "You – Distilled." Find Kris at TheGeniusCultivator.com
