Thoughts on the Future of Programming [in progress]
Programming languages of the future will be dazzling, but largely invisible inventions. Meanwhile, ordinary people will engage in ‘programming’ as part of their daily life without any awareness that they are doing so. We’ll swim through a sea of code, much of it written by humans and much of it written by software with our involvement.
Some of what I’m about to describe as the future of programming won’t look much like programming today, so I’m looking at it with a broad lens. Programming is telling a computer what to do - delivering a set of instructions or directives that can be incredibly structured or very basic. The smarter the system is, the less you have to tell it for it to understand what to do. As our systems get smarter, we’ll have to do less work to articulate our ideas to them. (Although maintaining the back-end of those systems may be harder than ever!)
The high-level programming languages we use in the future will be, like the ones we use today, a sort of controlled simulation or virtual space, not unlike a videogame, except their purpose is to enable us to create software. As Vikram Chandra says in his book “Geek Sublime”:
“All modern high-level languages provide the same ease of use. I work inside an orderly, simplified hallucination, a maya that is illusion and not-illusion—the code I write sets off other subterranean incantations which are completely illegible to me, but I can cause objects to move in the real world, and send messages to the other side of the planet.”
##Programming, Predictions and Preferences
There’s much thought and debate among programmers about the current languages and their competing virtues, but comparatively little in the way of predictions, and even less about the radical possibilities of what programming might be in the future. Part of this is because it’s incredibly hard to predict where software will go: it is driven by the slow, arbitrary process of human design and not, like the world of hardware, by breakthroughs in basic research into physical materials that can radically accelerate things.
Whether Ruby or Python will be the more important language in twenty years is anyone’s guess, though many programmers can muster strong opinions on it. It’s more in the domain of futurists and other speculators to envision what a brand new programming language might look like twenty years from now, in 2035.
My best guess is that while in general we’ll see certain trends - highly abstract, easier to write and read, and augmented by non-text-based interfaces - we’ll also see a flowering of what I’m calling microlanguages: domain-specific, environment-aware and tailored to special purposes. I also think a prospect long in coming will someday arrive: as the Internet proliferates through the globe and homegrown Silicon Valleys spring up, we’ll see a truly global variety of approaches to building software, and this will eventually include a challenge (or at least a viable alternative) to the dominance of English as the language intertwined with our computer code.
So while this is a speculative essay thinking ahead from 2015 to imagine what programming languages might be like in Internet-eons - decades from now - it will be mostly an extrapolation of certain trends I already see.
Why is all this important? As our world is shaped by software, it in turn shapes us, and the ability to write software will continue to be a key skill. Programmers become very attached to the languages they use; they become a vehicle for their ideas, and after long use merge in some deep way with the psyche. They even become evangelical in their zeal for a language, and engage in interminably long, quasi-theological debates over their respective beauty or ugliness.
Most of this programmer debate consists of the ways in which languages structure themselves, and in so doing support or discourage certain design paradigms. The way you organize the code in a large application becomes very important as our limited human brains spool out handiwork so complex we can hardly understand what we’ve ourselves created, much less what everyone in a big team has.
Like natural languages, the programming languages you know define how you are able to think about a problem - its parameters and boundaries - and provide you with the vocabulary to formulate approaches to solutions and the building blocks to create worlds. But natural languages are much less similar to each other than programming languages are, and much slower to evolve. Programming languages, despite floating as they do in the lofty domain of software abstraction, are still shaped by hardware capabilities. Higher-level abstractions are largely made possible by improvements in processor speed: the higher the abstraction, the less efficient the code, as it relies on giant, hidden-away libraries to interpret and translate the high-level code into something a machine can understand directly.
##Ultra High-Level Languages
This brings us to my first prediction: in general, programming languages will be higher-level - especially as microprocessor speeds continue to increase, giving slack to the language designers. The details of how the machine works will increasingly be abstracted away or managed automatically. Comparing older programming languages with newer ones, this seems to be the clearest trend and the safest prediction to make: programmers will continue to find ways to encapsulate more computing into more compact syntactical expressions, thus enabling them to write in a line of code what would have taken hundreds of lines.
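To make the trend concrete: a word-frequency count that once demanded pages of manual memory management and string parsing in a low-level language now fits in a couple of lines of a high-level language like Python - a small sketch using only the standard library:

```python
from collections import Counter

# Tally word frequencies in a text. Decades ago this meant
# hand-rolled hash tables and pointer arithmetic; today the
# language's standard library encapsulates all of that work.
text = "the future of programming is the future of thought"
top_two = Counter(text.split()).most_common(2)
print(top_two)  # [('the', 2), ('future', 2)]
```

The compactness comes precisely from the hidden machinery: hashing, dynamic memory, and sorting are all abstracted away beneath two short lines.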
So what would an ultra high-level language look like? It would closely resemble natural language, but it would still be a structured, highly formal kind of language. It might be, as a programmer friend freaking out about his future extinction once suggested to me, that something like UML (Unified Modeling Language) would be used to design entire applications. Or it could be something even more natural-language-like, with some kind of intermediary interpretation to smooth out the human tendency toward ambiguity and the kinks of natural language. An ultra high-level language could be graphical, or a mixture of graphical and text-based.
What is a “microlanguage” anyway? Largely I’m talking about something there’s already a concept for - domain-specific languages, in contrast with general-purpose ones. What I’m trying to communicate here (aside from something self-indulgently catchy) is a word with a sense of a little something extra - not just a limited-purpose language but one that fits itself to its intended use as a wine opener fits the bottle it opens. A microlanguage:
- Has a simple syntax but a powerful set of rules.
- Is error tolerant and flexible.
- Has a vocabulary more important than its syntax.
- Is a big pile of syntactic sugar.
- Is accessible and useful to not just people whose primary activity is programming, but to computer-literate professionals from the problem domain it is designed for.
- Has not just a specific domain, but a specific demographic in mind.
By this definition, CSS is a microlanguage for web designers. So is jQuery. They’re both fantastically useful and tailor-made to the environment they fit. Another key example is Processing, the language (based on Java) that was created for artists and designers. What I’m calling microlanguages blurs the notion of a programming language with that of libraries and frameworks. But that line is already blurred.
It’s not hard to imagine the arrival of something like “roboQuery,” a dead-simple microlanguage for programming robots, or “House Basic,” a microlanguage for programming your home devices once your fridge, toaster, coffee maker, stereo system, and so on are all online. These microlanguages will serve several purposes at once: they’ll act as a standard interface for various manufacturers to agree on and repurpose as a way of interacting with their products; they’ll give consumer hobbyists the ability to take control of their devices; and they’ll empower technology professionals (many of whom will be closer to customer service and product maintenance than full-on product design) to create solutions with small bits of efficient, readable and repurposable code without having to slow down too much each time they work with a new component.
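“House Basic” is of course a made-up name, but we can sketch what hosting such a microlanguage inside today’s Python might feel like. Every device name and action below is invented for illustration - no real product API is assumed:

```python
# A sketch of a "House Basic"-style morning routine, hosted as a
# tiny Python DSL. Devices and their actions are hypothetical.
class Device:
    def __init__(self, name):
        self.name = name
        self.log = []  # record of actions performed, oldest first

    def do(self, action, **settings):
        self.log.append((action, settings))
        print(f"{self.name}: {action} {settings}")

coffee_maker = Device("coffee_maker")
toaster = Device("toaster")

# The "program" a homeowner might write: short, readable,
# and tailored to one domain and one demographic.
coffee_maker.do("brew", strength="strong")
toaster.do("toast", level=3)
```

The point of the sketch is the shape of the code, not its machinery: a handful of nouns and verbs from the home, and almost nothing else to learn.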
##Natural Language Interfaces
Speaking of houses full of interconnected devices, imagine one morning a few years from now, you wake up and tell your kitchen to start making you coffee and toast. In other words, before your eyes are open, you’re programming.
Siri (the iPhone’s natural-language-based assistant) and other voice interfaces, like customer service phone trees, are still more a source of frustration and humor in 2015 than anything else - they’re sometimes a gimmick, sometimes a way for corporations to shave a few dollars off customer service. They’re still not very useful.
But spend a little while with Siri and you can start to see the dance we’ll be doing in the future. You quickly figure out the limitations of the interface and begin to work around them. You see that some kinds of commands are responded to with the desired effect while others make the system choke.
Of course, we won’t think of this as programming. We’ll think of it as interacting in a natural way with our computers. But at some point, the translation of speech into machine code is going to lead to unexpected possibilities, even be exploited. You can even imagine someone learning to hack a computer system using their voice, much in the way we hack other people by conning or lying to them. Being very literal-minded, computers may be particularly susceptible to this kind of manipulation.
##Visual Languages
It’s inevitable, given the number of people with a desire to program and the difficulty of learning how to do so, that visual languages will emerge. An obvious example today is Scratch, a visual programming toolkit developed by the MIT Media Lab. But I would offer that visual programming languages are popping up all over the place.
They’re showing up in web services like If This Then That (IFTTT). They’re even present in content management systems like WordPress and Drupal. Drupal especially, with its endless configuration pages and dropdown menus, offers such a vast control panel to the user that the usability advantage it offers over dealing directly with code is questionable. And yet for many people, even the most cluttered and byzantine graphical interface will be preferable to doing the simplest kind of programming.
For this reason, visual languages will come in and out of fashion but will consistently be returned to and advanced. And a quick note on what I mean by visual languages, to distinguish them from tools that generate code, which developers move in and out of all the time. A web developer might draw an icon in Adobe Illustrator, export the SVG file and then manipulate it directly in code. That’s not visual programming, or at least not mainly what I’m talking about here. I’m picturing visual programming as something that directs the flow of a program: the layout of a page, the behavior of an element, the logic that makes this happen if and when that other thing happens.
Responding to events, creating conditional logic, transmitting data from place to place. These are the routine tasks of programmers, and they can and will be done in various ways by visual toolkits. And who is to say that some novel, even beautiful way of doing these things won’t be invented someday? Some neon dream of diagrams, looping lines, glowing circles, interspersed with letters and numbers, that will be both abstract enough to be powerful and tactile enough to facilitate deep engagement.
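Under any such visual surface, those routine tasks reduce to the same skeleton: an event arrives, a condition decides whether it matters, an action responds. Here is a minimal, hypothetical rule engine of the kind an IFTTT-style toolkit might drive - all names and events are invented for illustration:

```python
# A minimal event/condition/action rule engine. A visual toolkit
# would let users assemble these rules by wiring boxes together
# instead of typing lambdas.
rules = []

def when(condition, then):
    """Register a rule: run `then` whenever `condition` matches."""
    rules.append((condition, then))

def dispatch(event):
    """Run every matching rule against an incoming event."""
    return [action(event) for condition, action in rules
            if condition(event)]

when(lambda e: e["type"] == "sunset",
     then=lambda e: f"turn on the lights at {e['time']}")

print(dispatch({"type": "sunset", "time": "18:42"}))
# ['turn on the lights at 18:42']
```

Whether the rules are drawn as glowing circles or typed as text, the underlying structure - event, condition, action - stays the same.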
##Breaking With English
If programming languages truly grow to resemble natural languages, it’s inevitable that Spanish-, Arabic-, or Mandarin-speaking engineers will eventually create tools that let them program in a language closer to the one they speak natively and think in - or in other forms of visual communication they are comfortable and fluent in.
This could mean a language that lets you write from right to left or top to bottom; use a familiar character set; or use a special set of characters or ideograms to write your program. It could also be a mixture of all these things: diagrams to wire up logic, pictograms to create templates, and a non-English-based language that still makes use of ASCII characters for interoperability.
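Small cracks in English’s dominance already exist. Python 3, for instance, accepts Unicode identifiers, so variables and functions can be named in a non-English script even though the keywords themselves remain English - a modest illustration:

```python
# Python 3 permits non-ASCII identifiers: names can be Spanish,
# Chinese, Arabic, and so on, though keywords like "def" and
# "return" are still English.
def saludo(nombre):
    return f"Hola, {nombre}"

def 问候(名字):
    return f"你好, {名字}"

print(saludo("María"))  # Hola, María
print(问候("小明"))      # 你好, 小明
```

It’s a half-measure - the grammar of the language is still English-shaped - but it shows the door is already ajar.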
##AI Interpreted Languages
It takes a lot of work to take an idea you have and render it into software. What if an artificial intelligence could help you with the translation?
There’s an early prototype of this kind of thing in existence today, in the form of code generators - ‘chefs’ that help you cook up the beginnings of a web application, for example. At present this is a one-way process: once you start tweaking the code, you can’t go back to your friendly chef and ask for extra seasoning.
Artificial Intelligence is a huge topic and speculation on it goes beyond the scope of this essay. But even speaking conservatively, AI (here mostly meaning machine learning) will surely yield improvements in natural language processing and interpretation.
Compilers already do very sophisticated things today, juggling complex syntactical expressions and transforming them into something very different and lower-level. An AI interpreted language could conceivably be a super-compiler, turning ever higher-level, more natural-language-like, sloppier, messier, and stupider human code into something with the precision a machine can understand.
When a human translates a garbled piece of text, they’re doing some very subtle work, making guesses and knowledge-based inferences. Asking a computer to do the same is a tall order, but it’s not inconceivable that AI could improve the aspect of programming that is still the most frustrating: losing your flow to fix bugs because you got some small detail wrong.
An AI interpreted language would smooth over those bugs, making judgments about the details based on what it could perceive about the larger pattern.
The amazing thing about software innovation is that it exists independently of hardware innovation. What if Moore’s Law - the much-lauded tendency of processing power to double every couple of years - ended today and became instead Moore’s Wall, a hard physical limit? It would certainly affect the world of software, but if anything it would make it more important, not less.
If improvements in processor speed and other hardware components slowed to a crawl or halted completely, software innovations could still continue indefinitely. Hardware defines the available ingredients, but the chef can still come up with new dishes even if magical new components stop appearing in their cupboard.
In any possible future scenario, from the most optimistic to the least, programming will play an important role. I’m curious to see what that will look like, and the only thing I’m really certain of is that some of my ideas about it will be naive and turn out to be totally wrong - and despite that, or because of it, it will be even more interesting.