Hi there!
I've been having a blast lately, dusting off my old skills and diving into Atari STE programming with the help of the newest ChatGPT model, GPT-4. You can check out what I've been up to at
https://github.com/diegoparrilla/atarist-silly-demo . It's been a fun side project to get back into Atari ST programming, which I haven't touched since 1990, and give my 68K skills a workout.
With plenty of experience developing both open source and proprietary software, I thought it'd be interesting to see how the latest tech could lend a hand. And that's where things get really exciting: I'm curious to see whether advanced tools like GitHub Copilot and ChatGPT can really make a difference in this little adventure of mine. How cool would that be?
Alright, so here's the scoop: I've been a huge fan of GitHub Copilot from the start, using it with modern languages, and it's an absolute game-changer when it comes to boosting developer productivity. But when it came to my journey with 68000 code, well, the outcome wasn't as fruitful, sadly.
I've seen GitHub Copilot do some nifty stuff with Python - it can spin up simple functions or tests just from a comment that spells out what I want to code. But when it comes to 68000, it's a different story - we're talking barely being able to complete a full line of code.
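To make that concrete, here's the kind of comment-driven completion I mean. This snippet isn't from the demo, and the function is made up purely for illustration (and I'm writing it in C rather than Python, to stay in the same world as the rest of this project): you type the comment, and Copilot will typically draft a plausible body along these lines.

#include <stdint.h>

/* Count how many bits are set in a 16-bit word. */
int popcount16(uint16_t w)
{
    int count = 0;
    while (w) {
        count += w & 1; /* add the lowest bit */
        w >>= 1;        /* move the next bit down */
    }
    return count;
}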
However, there's a silver lining! GitHub Copilot truly shines when it comes to annotating your code. Now, this is super important when you're dealing with assembly code. The way GitHub Copilot is built makes it fantastic at churning out code comments. Want to see what I mean? Just take a look at the source code of the demo - most of those comments were penned by our friend, Copilot.
My experience with ChatGPT is a bit of a mixed bag. You see, you can throw pretty complex tasks at ChatGPT and it'll always come back with something... but whether that something is valid is a roll of the dice. ChatGPT can be a bit of a fibber. If it's stumped and doesn't know how to complete a task, it tends to whip up some make-believe solution with quite a bit of gusto. The experts have a name for this: they call it 'hallucinations'.
That's exactly what happened when I asked it to write a simple function in 68000 assembly. Here's the function for context:
https://github.com/diegoparrilla/ataris ... scrl.s#L46. It was for the Mega text scroll on screen, and I needed to rotate a set of 16x16 font glyphs 90 degrees before getting into the main demo loop. Well, it was a no-go. The function it came up with was basically gibberish: it didn't work and didn't even attempt to tackle the problem.
But here's the kicker: I asked ChatGPT to rewrite the function in C, and it came back with a beautiful C function that nailed it! Now, I didn't end up using it, because I found it easier to write a pure 68000 function myself. But it was still quite the plot twist!
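For the curious, here's roughly what such a rotation looks like in C. To be clear, this is not the function ChatGPT gave me and not the 68000 routine from the repo; it's just a minimal sketch of my own, assuming each 16x16 glyph is stored as 16 words with the most significant bit as the leftmost pixel.

#include <stdint.h>

/* Rotate one 16x16, 1-bit-per-pixel glyph 90 degrees clockwise.
   Each of the 16 words holds one row, MSB = leftmost pixel.
   src and dst must not overlap. */
void rotate_glyph_cw(const uint16_t src[16], uint16_t dst[16])
{
    for (int row = 0; row < 16; row++)
        dst[row] = 0;

    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            /* If pixel (x, y) is set in the source glyph... */
            if (src[y] & (0x8000u >> x)) {
                /* ...a clockwise quarter turn puts it at
                   row x, column 15 - y in the destination. */
                dst[x] |= (uint16_t)(0x8000u >> (15 - y));
            }
        }
    }
}

Nothing fancy: two nested loops and some bit masking. Doing the equivalent with bit-test and shift instructions in 68000 is perfectly doable by hand, but apparently that was a step too far for ChatGPT in assembly.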
So here's my theory: ChatGPT could whip up the function in C but not in 68000, and I reckon it's all about the training data. Simply put, there's a truckload more C code out there for the model to learn from compared to the somewhat limited 68000 codebase. This means the model's more likely to give you a decent response when asked about a widely-used language versus a more niche one.
Alright, that's my piece. Apologies for the lengthy response, but this is a subject I'm really into.
P.S.: Fun fact - I got a helping hand from ChatGPT to give my original answer a friendly makeover!