We’re in the earliest days of a seismic shift in technology (and power consumption, at that), where AI is capable of writing applications in many of the most common languages… well, mostly capable. It will get better with training over time. But what happens if AI is directed to create a new programming language, one optimized for AI to write in, with human readability secondary?

To date, the logic & syntax of programming languages have been the domain of human engineers. We celebrate the engineers who created the original Unix OS and the C language, but what happens when AI grows sufficiently capable in reasoning to create a new language optimized for its own use? Are we ready for a time when AI can write more efficient code with fewer vulnerabilities than a human can, but that code becomes difficult for us to read & validate? Will we have safeguards in place so that a human can’t simply deploy code that seems to do what they want but that they cannot verify?

Just a fun thought experiment.