I'm a software developer with 20+ years of experience, but in all that time I never programmed any games, even though I'd wanted to for the longest time. With the advent of AI coding agents I figured this was the best time to try, so I learned a bit of Phaser.js (a JavaScript-based game engine) and entered Beginner's Jam Summer 2025 - a game jam for beginners in the game dev industry that allows AI coding. After around 25-30 hours (working mainly after my full-time day job) I managed to submit the game I called "Tower of Time" (the theme of the jam was "Time Travel").
You can play it in your browser here: https://m4v3k.itch.io/tower-of-time
The goal of this project for me was first and foremost to see if AI coding is good enough to help me create something that's actually fun to play, and to my delight it turns out the answer is yes! I decided to document the whole process for myself and others to learn from my mistakes, so both the code AND all the prompts I used are published on GitHub (see submission link). The art assets are largely taken from itch.io artists who shared them for free, with some slight touch-ups. Sounds came from freesound.org.
I've also streamed parts of the process; you can watch me working on the final stretch and submitting the finished game (warning: it's 5+ hours long):
https://www.twitch.tv/videos/2503428478
During this process I've learned a lot and I want to use this knowledge in my next project that will hopefully be more ambitious. If you have any comments or questions I'm here to answer!
This is awesome. I've been in software for 20+ years now as well.
One thing I've noticed is many (most?) people in our cohort are very skeptical of AI coding (or simply aren't paying attention).
I recently developed a large-ish app (~34k SLOC) primarily using AI. My impression is that the leverage you get out of it scales dramatically with the quality of your instructions, the structure of your interactions, and the amount of attention you pay to the outputs (e.g. for course-correction).
"Just like every other tool!"
The difference is the specific leverage is 10x any other "10x" tool I've encountered so far. So, just like every tool, only more so.
I think what most skeptics miss is that we shouldn't treat these as external things. If you attempt to wholly delegate some task with a poorly-specified description of the intended outcome, you're gonna have a bad time. There may be a day when these things can read our minds, but it's not today. What it CAN do is help you clarify your thinking, teach you new things, and blast through some of the drudgery. To get max leverage, we need to integrate them into our own cognitive loops.
I am more interested in what this article and project do not seem to mention.
> During this process I've learned a lot
Yes, but what exactly? I guess you don't have to touch the project once it's finished, so there is less value in familiarizing yourself with the source. The source is roughly 15,135 lines. That is quite a chunk, and it most likely would have taken more than 30 hours to write even for someone who knows the basics of TypeScript and the Phaser library.
Yours is beautiful; the code is too. I'm sure you had a much bigger hand in it than just using AI.
I stopped coding a long time ago. Recently, after a few friends insisted I try out AI-assisted coding, I tinkered. All I came up with was a bubble wrap popper, and a silencer. :-)
Fun game!
In the old days, code reuse was an aspirational goal. We had collections of functions, libraries, etc., but the overhead of reusing specific lines of code, or patterns of lines of code, was too burdensome to be practical. Many tutorials have been published on how to create a tower defense game, meaning there are tons of sample code out there for this domain.
I would ask: given the amount of source material available, when we ask an LLM to generate code, is this really "AI" of any sort, or is it really a new kind of search?
I think indie games could be a really good use case for coding AIs. Low stakes, fun-oriented, sounds like a match.
The first commit[0] seems to have a lot of code, but no `PROMPTS.md` yet.
For example, `EnergySystem.ts` is already present on this first commit, but later appears in the `PROMPTS.md` in a way that suggests it was made from scratch by the AI.
Can you elaborate a bit more on this part of the repository history?
[0]: https://github.com/maciej-trebacz/tower-of-time-game/commit/...
That's pretty impressive and super motivating. Love that you documented the prompts. From my experience "vibe coding" can either speed you up or slow you down. As long as you are using succinct and clear instructions and know how to review code quickly, as well as understand the architecture you can really speed up the process
This is the first I've heard of Augment Code. What does it do? Why did you pick that tool, versus alternatives? How well did it work for you? Do you recommend it?
A bug in the intro: in the first round of my first playthrough, my turret destroyed one of the critters and the other reached the tower. There was no further prompt or anything happening in the game after that, and I had to restart. The next time, the turret did not destroy any critters, the prompt to use Backspace appeared, and the game progressed normally.
Thanks for sharing! This aligns with a workflow I've been converging on: incorporating traceability and transparency into LLM-augmented workflows[1]. One of the big benefits I've realized is that sharing and committing prompts gives significantly more insight into the original problem the developer set out to solve, and it additionally shows how that problem morphed over time and what new challenges arose. Cool project!
Thanks for this. I made a tower defence game a while ago and had been considering applying an AI to the task of designing new waves and tuning hitpoints/speed/armour.
It made me think that one of the things that it probably needs is a way to get a 'feel' for the game in motion. Perhaps a protocol for encoding visible game state into tokens is needed. With terrain, game entity positions, and any other properties visible to the player. I don't think a straight autoencoder over the whole thing would work but a game element autoencoder might as a list of tokens.
Then the game could provide an image for what the screen looks like plus tokens fed directly out of the engine to give the AI a notion of what is actually occurring. I'm not sure how much training a model would need to be able to use the tokens effectively. It's possible that the current embedding space can hold a representation of game state in a few tokens, then maybe only finetuning would be needed. You'd 'just' need a training set of game logs with measurements of how much fun people found them. There's probably some intriguing information there for whoever makes such a dataset. Identifying player preference clusters would open doors to making variants of existing games for different player types.
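To make the idea concrete, here's a hypothetical sketch in TypeScript of what I mean by feeding tokens out of the engine (the `encodeState` function, the entity kinds, and the token format are all made up for illustration, not anything from the actual project):

```typescript
// Hypothetical sketch: serialize the visible game state into a compact
// token list an LLM could consume alongside a screenshot.
type EntityKind = "tower" | "enemy" | "crystal";

interface GameEntity {
  kind: EntityKind;
  x: number; // tile coordinates
  y: number;
  hp: number;
}

// One short token per entity, e.g. "enemy@4,7#hp30", plus a few
// global tokens for wave number and player resources.
function encodeState(
  entities: GameEntity[],
  wave: number,
  energy: number
): string[] {
  const tokens = [`wave:${wave}`, `energy:${energy}`];
  for (const e of entities) {
    tokens.push(`${e.kind}@${e.x},${e.y}#hp${e.hp}`);
  }
  return tokens;
}

const snapshot = encodeState(
  [
    { kind: "tower", x: 3, y: 5, hp: 100 },
    { kind: "enemy", x: 4, y: 7, hp: 30 },
  ],
  2,
  45
);
console.log(snapshot.join(" "));
// → "wave:2 energy:45 tower@3,5#hp100 enemy@4,7#hp30"
```

A flat string encoding like this is lossy, but it might be enough for a model to reason about pacing and difficulty without needing a trained autoencoder at all.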
Thanks for the read! I too have over 20 years in tech and have been going back and forth with Gemini-cli to gamify some tools for integration testing Enterprise applications, and it's amazing what can be done with Gemini alongside MCP servers. I am finding positive results when I approach problems in chunks and provide clarity in prompt instructions. The AI will make mistakes and sometimes gets caught in loops on certain problems (like application routing.. lol), but I am happy to step in and effectively pair program with the AI when issues arise. I've also noticed that there has never been a better time to enforce principles like Duplication Is Evil, because otherwise the AI may make a change in one area and forget that it has similar changes to make in another file. This applies both to programming logic and to User eXperience and application behaviour.
Anyway what a world. It would have taken me weeks to create what an AI and myself are able to whip up in a few short, and fun, hours.
Giving a personality to Gemini is also a vital feature to me. I love the portability of the GEMINI.md file so I can bring that personality onto other devices and hand-tailor it to custom specifications.
> AIs like to write a lot of code
I vibe coded a greenfield side project last weekend for the first time and I was not prepared for this. It wrote probably 5x more functions than it needed or used, and it absolutely did not trust the type definitions. It added runtime guards for so many random property accesses.
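For anyone who hasn't run into this, a made-up TypeScript example of the pattern (the `User` type and function names are hypothetical, just to illustrate the kind of code it generates):

```typescript
interface User {
  name: string;
  email: string;
}

// What the AI tends to write: runtime guards re-checking properties
// the type system already guarantees are present and are strings.
function greetDefensive(user: User): string {
  if (user && typeof user.name === "string" && user.name.length > 0) {
    return `Hello, ${user.name}!`;
  }
  return "Hello, stranger!";
}

// What the type definitions already make safe:
function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

console.log(greet({ name: "Ada", email: "ada@example.com" }));
// → "Hello, Ada!"
```

The guards aren't wrong, exactly; they're just noise when the data never crosses an untyped boundary.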
I enjoyed watching it go from taking credit for writing new files and changes to, a few hours later, forgetting that it was the one that wrote them... repeatedly calling it "legacy" code and assuming the intents of the original author.
But yeah, it, Claude (no idea which one), likes to be verbose!
I especially found it funny when it would load the web app in the built-in browser to check its work, then claim it had found the problem before the page even finished opening.
I noticed it's really obsessed with using Python tooling... in a typescript/node/npm project.
Overall it was fun and useful, but we've got a long way to go before PMs and non-engineers can write production-quality software from scratch via prompts.
This is a pretty cool game! I love the twists of rewinding time and playing with the keyboard. It would look pretty cool on Reddit, with a level builder. Redditors could build levels to challenge each other and see who can reach a highscore on each of the UGC levels. Check out Flappy Goose and Build It on Reddit to see some examples.
I've been using Claude for about a month now to do the straightforward things I don't want to do myself. The power of these development techniques is nowhere near fully tapped yet, from what I can see.
Very cool and I wish it lasted longer.
Such a cool game! Exactly the simple TD game I've been craving for a while.
If you ever want to build this out in Unity, you should try https://www.coplay.dev/ for the AI copilot
Thanks for the game!
Fun game! Starred on github for making the development process transparent, including sharing your prompts! :)
I’m finding incredible amusement in the idea of there being people who check-in prompts as the source code, and the “reproducible builds” people, and sitting them next to each other at a convention.
Curious if anyone here has tried rosebud.ai for something similar. I looked into it, and it did appear to break the work down into steps, but it can't really produce anything that runs without upgrading to a paid tier.
I couldn't stop playing this game, very engaging :) Thanks!
I didn't find any mention of the costs spent on Claude.
How many tokens did you use up and what did you pay for them?
Part of me thinks Rockstar delayed the release of GTA 6 because they realized they can polish the game by a significant margin using the latest AI tools.
Ok folks, I need a hint. I can't ever build up enough energy to afford a second turret. What's the secret?
Great game! The rewind-time "skill" makes it feel like playing an Edge of Tomorrow game.
Excited to try this when I’m on a computer. Thanks for sharing everything!
How much time did it take start to finish?
so cool
After scanning through the video, the first 20 minutes is a guy coding with no AI involved. He's manually designing a level in a pre-made level editor. He's manually writing code in a pre-made IDE. He's not having the AI write any code.
At the 20 minute mark, he decides to ask the AI a question. He wants it to figure out how to prevent a menu from showing when it shouldn't. It takes him 57 seconds to type/communicate this to the AI.
He then basically just sits there for over 60 seconds while the AI analyzes the relevant code and figures it out, slowly outputting progress along the way.
After a full two minutes into this "AI assistance" process, the AI finally tells him to just call a "canBuildAtCurrentPosition" method when a button is pressed, which is a method that already exists, to switch on whether the menu should be shown or not.
The AI also then tries to run the game to test whether the change works, even though in the context he provided he told it never to run it, so he has to forcefully stop the AI from spending more time on that, and he has to edit a context file to be even more explicit that the AI should not do this. He's frustrated, saying "how many times do I have to tell it to not do that".
So, his first use of AI in 20 minutes of coding is an over-two-minute process for the AI to tell him to just call a method that already existed when a button is pressed. A single-line change, which you could trivially make in < 5 seconds if you were just aware of what code existed in your project.
About what I expected.
Why does this and the follow up comments feel like a sneaky ad for this “Augment Code” tool?
I'm really enjoying reading over the prompts used for development: (https://github.com/maciej-trebacz/tower-of-time-game/blob/ma...)
A lot of posts about "vibe coding success stories" would have you believe that with the right mix of MCPs, some complex claude code orchestration flow that uses 20 agents in parallel, and a bunch of LLM-generated rules files you can one-shot a game like this with the prompt "create a tower defense game where you rewind time. No security holes. No bugs."
But the prompts used for this project match my experience of what works best with AI-coding: a strong and thorough idea of what you want, broken up into hundreds of smaller problems, with specific architectural steers on the really critical pieces.