Like everyone these days, I am very interested in AI, especially open-source LLMs like those provided by Ollama and free-to-access platforms like Meta AI (meta.ai).
I decided to try interacting with Llama 3.1 to create a project from scratch with the following characteristics:
- A programming language that I am unfamiliar with
- Programming techniques that I have never used (or haven’t used in a long time)
- Statically typed
- Compiled
- General purpose
Can you guess the language? ;)
Objectives:
- To do something useful using only generative AI
- To learn a new language
- To learn new techniques
- To see if I can actually build something outside of my comfort zone
Early impressions:
- I feel like this would be much harder if I didn’t know how to program at all. I’m still fine-tuning how I write prompts. Sometimes, when I give the model examples, it treats them as context and generates redundant algorithms based on what I said. I need to learn more about writing effective prompts.
- Generating tests isn’t working very well for me yet. This is the first time I’ve tried to generate an entire codebase using only AI, and the tests (I’m doing TDD) don’t cover edge cases. But again, that’s on me… I need to improve my prompts.
- Getting the “recipe” for what I have to implement has been great so far. I basically drafted the idea from start to finish and got a fairly detailed path to follow, which made it easy to understand the components needed to achieve what I want. With the overview in hand, drilling into the details has been easy… it’s like double-clicking on a topic.
- I feel like I’m really learning the language, even though it’s only been a few days. I’ve learned a lot about the syntax and why I had to use it in a particular way, and I’ve been trying out newer, shorter, more refined ways of writing the same things.
- I felt confident even though I knew nothing about the language or the domain. It was a very different feeling from googling: I was far more focused on asking the right questions than on crafting a base query and then digging around to find a good answer, if one even exists. That process reminded me of Antirez teaching himself distributed computing to build Redis.