Last night I did the last “Hard” exercise on the JavaScript track, “change”, which involves making change with the smallest number of coins. I felt like that didn’t really count, so today I did “robot simulator” too. That one involves a robot with a position and a direction, and giving it instructions on which direction to point and when to advance. I have 20 exercises left, so I think I am two ahead now. If I get to the end of the track before the end of the 100 days, I will fill in with exercises from the Python track. It is half done, so I have plenty.
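The robot-simulator idea can be sketched in a few lines. This is my own minimal version, not the exercise's actual API: the class name, instruction letters (R = turn right, L = turn left, A = advance), and coordinate convention are all assumptions.

```javascript
// Minimal robot-simulator sketch: a robot tracks an (x, y) position
// and a facing direction, and executes instruction strings like "RAA".
// The names and instruction letters here are my own choices.
const DIRECTIONS = ["north", "east", "south", "west"];
const MOVES = { north: [0, 1], east: [1, 0], south: [0, -1], west: [-1, 0] };

class Robot {
  constructor(x = 0, y = 0, facing = "north") {
    this.x = x;
    this.y = y;
    this.facing = facing;
  }
  turn(step) {
    // Rotate through DIRECTIONS; +4 keeps the index non-negative for left turns.
    const i = DIRECTIONS.indexOf(this.facing);
    this.facing = DIRECTIONS[(i + step + 4) % 4];
  }
  execute(instructions) {
    for (const c of instructions) {
      if (c === "R") this.turn(1);
      else if (c === "L") this.turn(-1);
      else if (c === "A") {
        const [dx, dy] = MOVES[this.facing];
        this.x += dx;
        this.y += dy;
      }
    }
    return this;
  }
}
```

For example, a robot at the origin facing north that executes `"RAA"` ends up facing east at (2, 0).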

On the Gameboy front, I decided I wanted to add background music. I thought my rendition of Jingle Bells was terrible and wondered if AI could do it better, so I went to Ollama and asked. The first model would only give me a verse, but the second model gave me the whole song transcribed to notes as frequency/duration pairs. I pulled out the book, got things working from the continuous-sound sample, and it sounded terrible.
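For reference, the frequency/duration representation looks something like this. The pitches are standard equal-temperament frequencies for the opening phrase of Jingle Bells in C major; the durations and tempo are my own guesses, not what either model produced.

```javascript
// Opening phrase of Jingle Bells as [frequency (Hz), duration (ms)] pairs.
// Pitches are standard equal-temperament values; durations are assumed.
const E5 = 659.25, G5 = 783.99, C5 = 523.25, D5 = 587.33;
const Q = 250, H = 500; // quarter and half note at an assumed tempo

const jingleBells = [
  [E5, Q], [E5, Q], [E5, H],          // Jin-gle bells
  [E5, Q], [E5, Q], [E5, H],          // jin-gle bells
  [E5, Q], [G5, Q], [C5, Q], [D5, Q], // jin-gle all the
  [E5, H * 2],                        // way
];

// Total play time of the phrase, handy for sizing buffers or delays.
const totalMs = jingleBells.reduce((sum, [, dur]) => sum + dur, 0);
```

If a transcription in this format sounds wrong, the note table itself is the first place to check, which is roughly what happened here.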

So, through the power of spreadsheets, I transcribed it to code for my BASIC interpreter. It sounded terrible there too.

Then, also through the power of spreadsheets, I transcribed my old version to Gameboy code and… it sounded exactly like it did before. I tried changing the note frequencies on the AI one without improvement, so I guess the AI just gave me horrible noise.

Now you can click the movie and hear for yourself. I wonder if the first AI’s answer was better (I checked: it was more pleasant, but it didn’t sound a thing like Jingle Bells to me). Remember, that was supposed to be “Jingle Bells”.