Do We Need a Programming Language Built Just for AI Agents?

Programming languages were made for humans, but AI now writes a large share of code, so perhaps we should design a language just for AI agents: one built on strict rules and clarity, prioritizing correctness over elegance.

A few days ago, someone posted a question online that I could not stop thinking about. The idea was simple: programming languages were built for humans, but now AI systems write a huge amount of code. So why not build a language made just for AI agents? A language with no hidden behavior, clear types, clear rules, and only one correct way to format everything. The goal would not be to be elegant. The goal would be to be correct. I left a comment with some rough ideas, and it made me want to dig deeper. Because the more I think about it, the more I realize this is not just a fun thought experiment. It might actually matter.

Content of the post:

Programming languages were made for humans.

Now AI systems write a lot of code.

Question:

Should we design a programming language just for AI agents?

A language like this would be very simple.
No hidden behavior.
Clear rules.
Clear types.
Clear side effects.
Built-in contracts.
Only one correct way to format code.

The goal would not be elegance.
The goal would be reliability.

Would this help agents write correct code on the first try?
Would it reduce endless fix-compile-fix loops?
Would it make automated refactoring safer?

Or should we instead focus on better models and keep today’s languages?

What do you think?
Is an agent-first language a good idea?

The reply:

What about using pseudocode as the language? There are no direct libraries. Agents can import other agents (system-level libraries will still be required underneath, but they are not visible to the language).

Each AI agent stores its own memory, and it should never redo a task it has already completed. The output must remain the same.

Alternatively, we could define the output in a specific language and create testing files. I am not sure. The agent would be responsible for performing everything needed to produce the final output.

Or maybe we only need a testing-focused language. We write the tests that must pass, and the agent's task is to satisfy those tests. Once a test passes, it should never be retried again later.

ughhh this is a complex topic, but interesting.

Why Current Languages Are a Problem for Agents

Human programmers learn a language over years. We build intuition. We know when something looks wrong. We understand the quirks of a language because we have been burned by them before. AI agents do not work that way. They generate code based on patterns. They do not always know when they are walking into a trap. And many popular languages are full of traps. JavaScript has type coercion that makes no sense. Python has mutable default arguments that surprise even experienced developers. C has undefined behavior that can do almost anything. When a human hits one of these traps, they debug it, feel a little frustrated, and move on. When an agent hits one, it might generate a fix that looks right but is still wrong. Then it gets stuck in a loop: fix, compile, fail, fix again. A language designed to remove all of that noise could help agents spend less time debugging and more time solving the actual problem.
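The Python trap mentioned above is worth seeing concretely, because it is exactly the kind of quiet behavior an agent can walk into. A minimal demonstration of the mutable default argument, along with the conventional fix:

```python
def append_item(item, bucket=[]):
    # The default list is created once, at function definition time,
    # and then shared across every call that omits the argument.
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2], the "fresh" default remembers the last call

def append_item_fixed(item, bucket=None):
    # The conventional fix: use None as a sentinel and allocate inside.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

Nothing in the broken version is a syntax error, and the code looks right at a glance, which is precisely why a pattern-matching agent can keep regenerating variations of it without ever fixing the underlying surprise.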

The Pseudocode Idea

One idea I threw out in my comment was using something close to pseudocode as the base language. No direct library imports. No package managers. No version conflicts. Agents would import other agents instead of libraries. Think about it this way: if an agent needs to sort a list, it does not import a sorting library. It calls another agent whose job is to sort things. That agent already knows how to do it. The output is always the same. The behavior is predictable. The system-level stuff, talking to the file system, making network requests, those things would still exist. But they would be hidden behind a clean layer. The agent writing the main logic would never have to touch them directly. This is not a fully formed idea. But the core of it is interesting: keep the top layer simple and human-readable, and let lower layers handle the messy real-world stuff.
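To make the idea less abstract, here is a toy sketch of "import agents, not libraries" in ordinary Python. Everything here is hypothetical: the `Agent` class, the `register`/`use` registry, and the `"sort"` capability name are all invented for illustration, not part of any real runtime.

```python
class Agent:
    """A capability with a single deterministic entry point."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, payload):
        return self.handler(payload)

# The registry plays the role of an import system: the calling agent
# asks for a capability by name instead of importing a library.
REGISTRY = {}

def register(name):
    def wrap(fn):
        REGISTRY[name] = Agent(name, fn)
        return fn
    return wrap

def use(name):
    return REGISTRY[name]

@register("sort")
def sort_agent(items):
    # Deterministic: the same input always yields the same output.
    return sorted(items)

# The "main logic" never touches a sorting library directly.
result = use("sort").run([3, 1, 2])
print(result)  # [1, 2, 3]
```

The point of the sketch is the shape, not the code: the top layer only knows capability names and stable contracts, while whatever messy system-level machinery implements them stays behind the registry.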

Memory and Repeating Work

Another piece of my comment was about memory. If an agent completes a task and the output is correct, it should never redo that task. The result gets stored. Done is done. This sounds obvious, but it is actually a big deal. Right now, agents can get into situations where they repeat work they already did because they lost track of what was finished. A language or runtime that makes completed tasks immutable could solve this at a very low level. You would not need the agent to be smart about it. The system would just not allow it.
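The "done is done" rule can be sketched as a small task store that records each completed result and refuses to redo the work. This is an illustrative design only; `TaskStore` and the task IDs are invented names:

```python
class TaskStore:
    def __init__(self):
        self._completed = {}

    def run(self, task_id, fn, *args):
        # If the task already finished, replay the stored result
        # instead of executing the work again.
        if task_id in self._completed:
            return self._completed[task_id]
        result = fn(*args)
        self._completed[task_id] = result
        return result

calls = []
def expensive(n):
    calls.append(n)  # track how many times the work actually runs
    return n * n

store = TaskStore()
print(store.run("square-4", expensive, 4))  # 16, computed
print(store.run("square-4", expensive, 4))  # 16, replayed from memory
print(len(calls))  # 1: the work ran exactly once
```

Notice that the agent does not have to be smart about avoiding repeated work; the store simply never calls the function a second time for the same task ID.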

What About Test-First Languages?

The third idea I had was maybe the most different from the others. What if we do not design a language for writing code at all? What if we design a language for writing tests? You write the tests that must pass. The agent's job is to make those tests pass. Once a test passes, it is locked. No one can break it later without explicitly removing the lock. This is a little like test-driven development, but taken further. The human is not writing the code. The human is writing the requirements in the form of tests. The agent figures out everything in between. This might be closer to how we should think about working with agents anyway. We are not co-pilots sitting next to them as they type. We are people who define what "done" looks like, and then we let the agent figure out how to get there.
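A tiny sketch of what "locked" tests might mean in practice, again with invented names (`LockedSuite` and the `"addition"` test are hypothetical): once a test passes it is recorded, and any later run where it fails is rejected outright.

```python
class LockedSuite:
    def __init__(self):
        self._locked = set()

    def check(self, name, passed):
        # A locked test that stops passing is treated as a hard error,
        # not a normal failure to iterate on.
        if name in self._locked and not passed:
            raise RuntimeError(f"regression: locked test {name!r} now fails")
        if passed:
            self._locked.add(name)
        return passed

suite = LockedSuite()

# The human writes the requirement as a test...
def spec_addition(impl):
    return impl(2, 3) == 5

# ...and the agent's candidate implementation must satisfy it.
candidate = lambda a, b: a + b
print(suite.check("addition", spec_addition(candidate)))  # True, now locked

# A later "refactor" that breaks the spec is rejected by the suite.
broken = lambda a, b: a - b
try:
    suite.check("addition", spec_addition(broken))
except RuntimeError as e:
    print(e)
```

The human's job here is only the `spec_addition` part; the agent supplies `candidate`, and the lock makes regressions a structural impossibility rather than a convention.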

Or Should We Just Build Better Models?

The honest counterargument is this: maybe we do not need a new language at all. Maybe we just need better models. Models that understand existing languages more deeply, that catch their own bugs, that know when something looks suspicious. There is something to that. A good enough model should be able to handle messy languages the same way a senior engineer does. And keeping things in existing languages means no extra tooling, no new ecosystems, no learning curve. But I think both things can be true. Better models help. And a cleaner language also helps. They are not competing ideas.

Where This Goes

I do not think anyone is about to release an "agent-first programming language" next month. But the conversation is worth having now, before AI-generated code becomes the default way most software gets written. If we wait until then to ask these questions, we will already be stuck with whatever we have. And fixing things later, when millions of codebases depend on the current setup, is much harder than thinking it through early. The question is not just what language agents should use. It is what does "writing software" even mean when the person writing it is not a person? That is the thing I keep coming back to.

~ Lasan
