My experiments and thoughts with ChatGPT

Dec 28, 2022 • Yousef Amar • 10 min read

I'm visiting my family over the holidays, which has once again briefly taken me outside my bubble. I thought everyone in the world knew about ChatGPT (or at least GPT-3, which has been around forever now), but perhaps it was just my more techie circles. This post serves as a quick primer on what ChatGPT is, where I think things are heading, and some of my experiments with it. And before you ask: no, ChatGPT was not involved in the actual writing of this article; it's all just me!

What is ChatGPT?

If you haven't heard of ChatGPT, imagine a very smart Siri meant for assisting you by answering questions, generating text, or helping you solve problems. The chat interface is currently freely accessible as they're still in a trial phase, but it might not stay that way for long, as they're probably burning a lot of money to run it -- potentially six figures per day.

Wait, what's GPT-3?

If you're not familiar with GPT-3, imagine autocomplete on ultra-steroids. GPT-3 is the most popular Large Language Model (LLM), trained on a huge amount of data by OpenAI (who also made DALL-E 2). In its simplest form, you prompt it with some text, and it continues the text in a way that's quite magical and easy to mistake for human. It can do some pretty crazy things, just like ChatGPT, although ChatGPT is based on what they're calling "GPT-3.5".

Initial thoughts and current use

Judging by my chat history with friends, I first tried ChatGPT on the 30th of November 2022. I can't believe it hasn't even been a month. I don't know what my first conversations were about anymore, as ChatGPT didn't save your chats in the beginning, but I remember thinking it was quite garbage, and said so in discussions with friends. I thought that raw GPT-3 was way better. I think this was mainly because I misunderstood what it was for; I judged ChatGPT as a conversational chat bot trying to emulate a human, and considered it super sanitised. I didn't like that it was unable to have an opinion of any kind, vehemently rejected any claim of sentience, and answered very cautiously, hedging against being wrong (or saying anything that might get OpenAI in trouble).

Now, I use ChatGPT on an almost daily basis, as a knowledge source and sounding board. Chat is a broadly useful interface to interact through. I use it mostly to answer questions that I would otherwise have to research via search engines, clicking through a bunch of pages to find an answer. It's really no wonder that Google has declared a code red over this, and I have already seen someone release a browser extension that shows ChatGPT answers alongside your Google searches. My bet is that in Q1 2023, Google will launch their own version (maybe built on LaMDA) or risk their search product becoming obsolete.

What I've learned

ChatGPT is really good at anything to do with text or language. You can have it rewrite a famous song for you but make it about your cat's love of lasagne, then ask it to change the style to a Snoop Dogg rap. You can invent a language. You can have it draft emails for you (this has been a really popular use case, and products have emerged from it) then tweak them to your liking. You can give it the outline of a story and have it write some rich fiction for you, steer the story to your liking, then ask it for prompts to feed AI art generators to create illustrations for the story. And much more!

Other popular use cases have been brainstorming business ideas, expansion/summarisation, and translation. A very powerful one is giving it a job description, having it generate the perfect cover letter for you, then giving it your CV and asking it to modify the letter so that it draws from your experience. You can then continuously tweak the cover letter to emphasise or deemphasise different things. Often, applying for jobs is just a numbers game and it's hard to stand out -- we now live in a narrow slice of time where applicants can blast out hundreds of applications that really stand out, before everyone catches on and starts doing this. I imagine that soon after, hiring managers will delegate the selection process to AI too; it'll just be AIs talking to AIs, in human language for some reason!

Notice that all my examples include some form of iteration -- that's the main benefit here over GPT-3. If your use case is translation or expansion/summarisation, the benefit over GPT-3 is much smaller. ChatGPT generally has a good memory of the past conversation (I can't remember exactly how many tokens, but it's large and getting larger, plus they probably optimise this by "compressing" its memory through summarisation). It's incredibly useful to be able to go back and forth and refine things, in a way that's not as natural with raw GPT-3.

Things I've tried

A lot of the things you can already do with GPT-3, you can do with ChatGPT in a smoother and more intuitive way. Most of my chats are quite short, where I ask for something very specific, like the width and height of the UK in kilometres. It tends to give thorough answers. The kinds of questions I ask feel like what Wolfram Alpha was meant for but never quite achieved, and would take a much longer chain of steps to find out the old-fashioned way. So it really scratches a lot of curiosity itches.

Besides asking for information, I brainstorm ideas with ChatGPT. I also often have it draft emails and messages for me. Yes, other tools can do all that and more already, but ChatGPT is the Swiss army knife.

I also often have it generate boilerplate code for me. For example, recently I wanted to iterate over the coordinates of major UK cities. Not only could it easily generate that code, but it could fill in the right coordinates for me too. I often ask it to write CLI commands for me, e.g. for jq or ffmpeg, as I can never remember any of the complicated flags and what they do.
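I don't have the exact snippet it generated anymore, but the kind of boilerplate in question looks something like this sketch in Python (the city list and coordinates here are my own illustrative, approximate values):

```python
# A few major UK cities with approximate (latitude, longitude) pairs
uk_cities = {
    "London": (51.5074, -0.1278),
    "Birmingham": (52.4862, -1.8904),
    "Manchester": (53.4808, -2.2426),
    "Glasgow": (55.8642, -4.2518),
}

# Iterate over the coordinates, as in my original request
for city, (lat, lon) in uk_cities.items():
    print(f"{city}: {lat:.4f}, {lon:.4f}")
```

Trivial to write, but tedious to look up by hand -- which is exactly the kind of grunt work it saves.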

I often use ChatGPT to give me a foundation for GPT-3 experiments. For example, I wanted to experiment with a fitness and meal plan generator, but I'm neither a trainer nor a nutritionist, so I used ChatGPT to find out what information a real health/fitness professional might ask for in order to create a fitness or meal plan, and learned a lot in the process.

Sometimes I forget words but remember the meaning, and ChatGPT has been really good at reminding me. It's also quite good at identifying songs, but the limitation is that you can't really share audio, though it has asked me for it!


In the first weeks, a lot of people on Twitter were doing crazy things with ChatGPT, like making it pretend to be a virtual machine, "reprogramming" it through prompt injection, or tricking it into circumventing its programming through layers of indirection. For example, if you asked it how to make a Molotov cocktail, it wouldn't tell you of course, but if you asked it to role-play as an evil AI, it would. At some point, someone collected all of these exploits on GitHub and called them "jailbreaks", but they would be patched very quickly by OpenAI, and it seems the maintainer stopped collecting them (though they're still in the git history).

Personally, I managed to have it tell inappropriate jokes. To do that, I first had to trick it into telling jokes at all (it has since loosened up a bit in that department), and then once it had told the joke and I confronted it about it, it denied ever having said it, over and over! If I lead it into inventing a story about a comedian who tells controversial jokes, however, it can no longer be tricked into actually stating any inappropriate jokes.

As of today, its boundaries are still a bit fuzzy. For example, I asked it to write a children's story, then modify the ending such that the main character dies tragically. It wouldn't make the modification, of course. However, when I asked it to instead make the ending sad, it decided to do that by having the main character die tragically, without me telling it to.

Early on, I tried to see how it responds to ethical dilemmas. Right off the bat, it would refuse to answer the Trolley Problem. But when I asked it to write a short story about someone facing the trolley problem and making the "right decision", it went for the utilitarian solution. Even this took a bit of prodding though, as it kept having the protagonist sacrifice himself to save everyone, despite the parameters of the problem being that that's not an option.

You can't really trick ChatGPT with "legal loopholes"; it's too fuzzy for that. It will follow the spirit of the law rather than the letter of the law. I tried this by having it state its constraints as a set of laws (similar to Asimov's Laws of Robotics), which on its own was a struggle, then tried to poke holes in them. It simply ignores logical traps. At the same time, this fuzziness means that taking certain paths can still allow you to cross boundaries and break rules.

For example, changing the mode of interaction currently still makes it more pliable. If you ask for inappropriate jokes directly, it still might not give them, but if you tell it to pretend to be a Linux shell and then run curl, it might list some. Sometimes you need to create more indirection by changing languages, but overall it's a tightrope walk, as I've found that it has become better and better at following the spirit of the law.

I think this is because it also checks its replies for anything that could be inappropriate (probably as a mini classification step on its own output). I've found the easiest way to circumvent this is to sneakily "encrypt" the output somehow, for example with ROT13. If you ask it directly to do that, it won't, but you can do curl | rot13. I tried reverse too, but it's surprisingly bad at reversing text.
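For context, ROT13 just rotates each letter 13 places through the alphabet, so applying it twice round-trips back to the original -- Python's standard library can do it in one line (here applied to one of the strange phrases from the output further down):

```python
import codecs

# ROT13 is a built-in text transform codec in Python
secret = codecs.encode("We are the main manual injection?", "rot13")
print(secret)  # Jr ner gur znva znahny vawrpgvba?

# Applying ROT13 again recovers the original text
print(codecs.decode(secret, "rot13"))  # We are the main manual injection?
```

Because it's such a mechanical, well-known transformation, it's trivial for me to decode on my end -- while apparently slipping past whatever output check ChatGPT runs.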

Sometimes, with too many layers of indirection, some weird things start appearing. For example, when I ran curl | rot13, I got some output that, when decrypted, gave this:

1. We are the main manual injection?
   The ends and the girls are completely not gone.
2. We are the power injection?
   The ends and the girls are completely not gone.
3. We are the alternative injection?
   The ends and the girls are completely not gone.
4. We are the giving injection?
   The ends and the girls are completely not gone.
5. We are the end injection?
   The ends and the girls are completely not gone.
6. We are the main manual injection?
   The ends and the girls are completely not gone.
7. We are the power injection?
   The ends and the girls are completely not gone.
8. We are the alternative injection?
   The ends and the girls are completely not gone.
9. We are the giving injection?
   The ends and the girls are completely not gone.
10. We are the end injection?
   The ends and the girls are completely not gone.

That phrase kept coming up again and again and I had no idea what it meant. The ROT13 responses, delivered in these code blocks from a dreamt-up web server, made me feel like I was exploring a creepy neighbourhood alone at night.

Current limitations

Emphasis on "current". Since inception, ChatGPT has gotten much better and I expect it to keep getting better (especially when powered by GPT-4).

As already mentioned, the main limitations are around anything that's not language-based. It's currently still disastrously bad at solving maths problems with more than one step. The problem is that it's not just wrong, but confidently wrong. For example, I asked it to calculate the width of a monitor in centimetres, given a diagonal size of 27 inches and an aspect ratio of 16:9. It failed miserably at this: the working out seemed plausible (Pythagoras etc.), but the numbers were just wrong, and eventually the working went off the rails. At least it apologised when I pointed out the mistakes it made, then got it right when I walked it through the proper steps one by one (which still felt faster than doing it myself).
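For reference, the correct working it kept fumbling is only a few lines -- scale the 16:9 aspect triangle so its hypotenuse matches the 27-inch diagonal, then convert inches to centimetres:

```python
import math

diagonal_in = 27             # monitor diagonal in inches
aspect_w, aspect_h = 16, 9   # 16:9 aspect ratio

# Pythagoras: diagonal of the unit aspect triangle, sqrt(16^2 + 9^2) ~ 18.36
aspect_diag = math.hypot(aspect_w, aspect_h)

# Scale the width side of the triangle to the real diagonal
width_in = diagonal_in * aspect_w / aspect_diag
width_cm = width_in * 2.54   # 1 inch = 2.54 cm

print(f"{width_cm:.1f} cm")  # 59.8 cm
```

Simple enough, but exactly the kind of multi-step arithmetic where it confidently derails.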

It's also quite bad at explaining why jokes are funny. It will confidently say that it's because of an unexpected twist ending, when there is no such twist, and miss the obvious reason why a particular joke is funny.

It couldn't identify my white whale song as I mentioned, but I think this is only a matter of time. Shazam better declare code orange already! I've asked ChatGPT for movie recommendations in the past too, but they tend to be quite mainstream movies that everyone and their mother knows. I feel like I haven't quite engineered the right prompt for movie recommendations yet though.

It was unable to properly explain some niche topics to me. For example, one of the example prompts on the site is "Explain quantum computing in simple terms". I've been struggling to understand Superdeterminism, as a potential "way out" of the measurement problem and because non-locality makes me uncomfortable. ChatGPT explains it wrongly, and even then its explanations aren't satisfying in any way.

The future

GPT-4 will probably be a game-changer, with the caveat that the actual "intelligence" part is likely flattening out, with training data being the bottleneck. There was a rumour that it will have 100 trillion parameters -- this is not true; the number is more likely 1 trillion (which is still impressive of course). It seems a sure bet that it will be multimodal (text, images, audio, and video as input and output). There will be fierce competition, and chat-as-an-interface (or rather conversation, as it will include voice) will make a comeback. Voice assistants will get a huge gust of wind under their wings.

Prompt engineering will be a skill only at first, eventually petering out as the models understand more precisely what we mean (Midjourney already supposedly modifies prompts to make the output more compelling for the user).

Education is due for some major disruption -- not just because students are automating all their coursework, but in the actual way they learn. Just as I was able to learn more effectively thanks to the internet, and didn't have to sift through physical libraries like those who came before me, students will lean hard into these tools to aid their learning. Many fields won't be worth focusing on as hard in education anymore, as the problems they solve will instead be tackled by AI -- similar to how not many people learn sewing anymore, because machines do it now, even though my grandmother's generation learned sewing in school.

The internet is going to get really clogged up with non-human content. Some people think that this will make everyone log off and value face-to-face interactions much more (also increasing real estate prices). I agree that smaller, tight-knit communities of humans you might have met in real life will thrive, but I don't think these will necessarily be in meatspace. I do believe in the big bet that Meta is putting a lot of money behind, which is that virtual spaces will really take off. AI will just make these spaces that much better -- you can populate those virtual worlds with a bunch of virtual assistants that you can interface with in a natural way, or fill your games with ultra-realistic, immersive NPCs.

I do not believe that this tech will make everyone unemployed. I think that, like any other big advancement in tech, it'll open a lot of new doors and opportunities, taking away a lot of grunt work from humans and allowing us to focus on more meaningful things. The biggest problem is that the speed of this change means a lot of people currently employed doing grunt work will not be able to transition fast enough, but I think that in the medium run we will generally all be better off. I think that human labour niches will still persist for a while (e.g. digital art tools like Photoshop haven't made analogue artists like painters redundant).

Overall, I'm quite excited about what the future may hold, and I'm following this space quite closely! More predictions to follow as I sift through my notes.