The Second Best Way to Pick Your Next Programming Language

Experienced developers often say that the best way to pick your next language to learn is to pick a project you want to ship and pick the best language for the job. This isn't wrong, but beginning developers often respond that the only "project" they have in mind for now is to learn more about programming generally and become a better developer overall. If this is your goal (and it's a good one!), then the "best way" to pick your next programming language just isn't applicable. But I think you'll appreciate the second best way.

Using this guide

If you're just starting out with programming, it's helpful to understand why there are so many languages in use. Every language has an underlying philosophy, a set of opinions about the best way to solve problems or the best way to solve a certain subset of problems. A good programmer usually has their own opinions on how to solve most problems, but is willing to set those preferences aside when a particular job calls for a particular tool.

If you're just starting out in software development or if you only know one or two languages, your goal should be to develop your own judgement through a variety of experiences, and picking a variety of programming languages is a good way to do that. The nice thing is that you don't have to learn every programming language--languages tend to fall into certain categories, and once you've learned one language from a category, you naturally develop a general understanding of the thinking behind that category and of what that kind of language is best used for.

My goal for this guide is to introduce a small, nonexhaustive variety of languages based on a few different common features of programming languages: type system, orientation, and abstraction level, in no particular order. Within these categories, I've listed what I think are a few prime examples that have these features. If your goal is to become a better programmer, you should try to choose programming languages from different categories to get as broad an experience as possible until you figure out for yourself what features you prefer in a programming language.

While I can't separate this post from my own opinions, I will make an effort to represent all sides, as I understand them, fairly. As a fair warning, since I am targeting beginners, I will be greatly (over?)simplifying a lot of advanced concepts, so take the technical nuances with a grain of salt.

Abstraction Level

Abstraction is a programming concept that refers to hiding details about the mechanics of a computer in order to focus on the logic specific to a program. At a very low level of abstraction, a programmer has to understand things like how numbers are stored in binary, how information is organized in a computer's memory, and the differences between processor architectures, even down to the kinds of circuits involved. This guide won't deal with languages at that level because beginners don't generally deal with concepts that low. Abstraction level is a matter of degree, though, and working with lower-level languages can be a gentle introduction to low-level concepts.

Lower-level languages

I'm specifically using the word "lower" because none of these are truly low-level languages in the way that something like assembly is. These are more like mid-level languages, or flexible languages that can optionally use some low-level features. They are often used in situations where developers need to micromanage hardware or performance, such as:

  • writing device drivers

  • robotics, especially work involving particularly small and specialized computers

  • writing compilers or runtimes for other programming languages

  • malware or information security research

C and C++ are relatively old and relatively permissive languages. It's easy to write what you want, and it's easy to write bugs. They make it possible to do granular optimizations, and they're a good opportunity to learn about computer hardware, if that's what you're into.

Rust is a low-level language with a more restrictive compiler (or a more helpful compiler, depending on how you look at it). It's especially good at enforcing memory safety, which helps with security.

Higher-level languages

Higher-level languages tend to trade performance for simplicity (from the developer's point of view). These languages often come with a lightweight virtual machine or runtime that standardizes the way programs run across physical machines. (As they say, "write once, run anywhere.")

You don't have the flexibility to control individual instructions or memory locations the way you do in a lower-level language, but higher-level languages are often better for managing larger codebases without worrying about the kinds of bugs that inevitably creep in through developer (human) error.

Higher-level languages are often used for:

  • web apps

  • mobile apps

  • application servers

  • desktop apps

  • cross-platform development

Higher-level languages are wildly diverse in the way they model a program, from Java, the grandfather of cross-platform development, to Python, the beloved scripting language. (Basically every other language in this guide is a higher-level language, so I won't elaborate here.)

A word on performance concerns

It's easy for beginners to get way over-focused on performance. There are three problems with this kind of thinking.

The first is, as they say, that "premature optimization is the root of all evil." You might be tempted to adopt a lower-level language with the intention of taking advantage of every performance optimization you possibly can, for the fastest possible version of the app you want to write. The next thing you know, you have a small bug that's causing performance problems, but you can't fix it because your super-optimized code is very difficult to read.

The second problem is that your super-optimized code probably won't contain many optimizations that you wouldn't already get in a higher-level context. The people who write compilers and runtimes are pretty smart and have a lot more help than you do. They know about these optimizations, and, most of the time, they're already applying them for you.

The last is YAGNI: You Ain't Gonna Need It. Let's say you're trying to write the most memory-optimized desktop app ever. Okay. But can you remember the last time you ran out of memory on your desktop? If it's been a while, you may be making sacrifices for something like low memory usage when you won't get any benefit.

So unless you're doing something pretty specialized, you probably won't benefit from focusing on performance too much too early. On the other hand: do what you want! Learn what you want! If you're interested in learning about performance, then pursue it to your heart's content, and using lower-level languages is a good way to do that. Just don't feel like your programs will necessarily suffer if you pick a higher-level language.

Type Systems

Of all language features, type systems seem to generate the hottest takes. I certainly have my preferences like everyone else, but know that there are exceptional developers on all sides of the debate. There are many dimensions to the type system discussion, from type dynamism to type strength to exhaustiveness to inference, and as an experienced developer I'm still learning new ways of thinking about types every day. But for the purposes of this article, I'm going to focus on one beginner-friendly and salient attribute of type systems: dynamic versus static typing.

Static typing

A language is statically typed if the type of a value is determined at compile time instead of at runtime. This requires more from the developer up front before a program will run at all, but it can prevent certain kinds of runtime exceptions.

For example, if I have a string x in a statically typed language and I later try to assign a number to x, the program will not run at all because it will not compile. This is considered a feature because it prevents developers from accidentally misusing data--the compiler error keeps certain kinds of bugs from going into production. It can also have performance benefits, because the compiler can make optimizations when it can make specific assumptions about the underlying data.
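
Here's what that looks like as a minimal TypeScript sketch (the variable name is purely illustrative):

typescript

// The annotation pins down the type of x at compile time
let x: string = "hello";

x = 42; // will not compile: Type 'number' is not assignable to type 'string'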

By some metrics, Java is the most popular statically typed language. C#, Java's close cousin, is mostly statically typed but can allow dynamic types through the rarely-used dynamic keyword. TypeScript, ReScript, and Elm are all examples of languages that apply a static type checker and compile to otherwise dynamically typed JavaScript.

Dynamic typing

In dynamically typed languages, no such constraint prevents a piece of code from running. Rather, the types of values are determined at runtime.
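
As a rough sketch of the difference, TypeScript's any type opts a variable out of static checking, which mimics how a dynamically typed language behaves (the names here are just for illustration):

typescript

// With any, the compiler stops checking, so mismatches only surface at runtime
let x: any = "hello";
x = 42;          // fine: the type of x is simply whatever it holds right now
x.toUpperCase(); // compiles, but throws a TypeError when it runs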

Proponents of dynamic typing argue that this makes code easier to change and refactor because there are no type declarations that have to be changed. Proponents of static typing argue that this makes it harder to change with confidence because without type declarations, you can be less sure the change you made won't result in a runtime error.

In general, dynamically typed languages tend to be used for smaller pieces of code, which is why there's a strong overlap between dynamic languages and languages called scripting languages--JavaScript, Bash, PowerShell, etc. However, devotees of dynamic typing may also decide to write larger programs like web services in dynamically typed languages. Python and Ruby are good examples of this.

Orientation

In the beginning, there was procedural code. And the code was bad.

Object orientation and functional orientation are two language features that facilitate better code organization to keep even extremely large codebases manageable, maintainable, easy to reason about, and easy to change without introducing subtle bugs. After so many years, the holy wars between the two philosophies are starting to wind down and the lines between functional languages and object oriented languages are starting to blur as languages continue to add features without regard for ideological purity.

What's great about learning both philosophies is that they can each be judiciously applied even in languages that aren't geared toward them--i.e., you can use functional principles in object oriented languages and vice versa.

In order to learn both philosophies, though, I recommend learning languages strongly geared to one side or another as well as languages that facilitate both very easily.

Object orientation

Object orientation is something you may well already be familiar with. It is well supported in many of the most popular languages such as Java, JavaScript, C#, C++, and Python.

If you have done a Computer Science undergraduate program, you're probably very familiar with object orientation. You've probably studied and implemented the "Gang of Four" object oriented design patterns. Perhaps you can also rattle off the principles of S.O.L.I.D. code or the four object-oriented principles.
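
At its core, object orientation is about bundling data together with the methods that are allowed to touch it. Here's a tiny TypeScript sketch (the class and its members are made up for illustration):

typescript

// Encapsulation: the balance can only change through methods that enforce the rules
class BankAccount {
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive");
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

const account = new BankAccount();
account.deposit(50);
// account.balance = -100; // will not compile: 'balance' is private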

If you're not familiar with object orientation at all, there's a lot to learn, and I won't cover all of it here. But I hope I've given you enough to start your search, because object orientation, as you will see on any job search site, is incredibly employable, and you'll need it if you want to have a career. That said, don't sleep on the next section, because the world is slowly opening up to the functional paradigm.

Functional orientation

If you're a beginner, the functional paradigm may be something you're not familiar with, but it's an excellent arrow for your quiver. You might be familiar with the conventional wisdom that "learning Haskell will make you a better programmer," because Haskell is sort of an extreme example: a highly restrictive language that won't compile unless your code conforms to a set of best practices.

While object orientation tries to prevent state management bugs by encouraging encapsulation, functional programming languages encourage (or outright require) immutability; there can be no state management bugs if a given value can only ever have one state. (This has the added benefit of automatic thread safety.)
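
As a small TypeScript sketch of that idea (the Point type and function here are hypothetical):

typescript

// Immutable data: readonly fields can't be reassigned after construction
type Point = { readonly x: number; readonly y: number };

// Instead of mutating the original, return a new value
function moveRight(p: Point, dx: number): Point {
  return { ...p, x: p.x + dx };
}

const start: Point = { x: 0, y: 0 };
const moved = moveRight(start, 5);
// start.x = 5; // will not compile: 'x' is a read-only property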

Further, while object oriented programs tend to be composed of procedure-like statements, functional languages encourage programmers to write in expressions instead, which often results in shorter, cleaner functions.
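
For a sense of the difference, compare a statement-style loop with an expression-style pipeline in TypeScript (the numbers are arbitrary):

typescript

const prices = [5, 12, 8];

// Statement style: build the result by mutating an accumulator
let total = 0;
for (const price of prices) {
  total += price;
}

// Expression style: the whole computation is a single expression
const total2 = prices.reduce((sum, price) => sum + price, 0);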

Lastly, functional programming languages tend to have more exhaustive type systems geared toward making invalid states unrepresentable. They do this by encouraging the use of enum-like structures to define all possible states, which the type-checker can then use to ensure that all possible states are considered in branched logic.
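
TypeScript's discriminated unions give a flavor of this, even though it isn't a functional-first language (the Shape type here is made up for illustration):

typescript

type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius * shape.radius;
    case "square":
      return shape.side * shape.side;
    default: {
      // If someone adds a new kind of Shape and forgets to handle it here,
      // this assignment stops compiling
      const unhandled: never = shape;
      return unhandled;
    }
  }
}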

Today, many object oriented languages have added various functional programming features (often in the form of syntactic sugar on top of existing features), but I would recommend learning at least two functional-first languages that don't emphasize object orientation at all. Some good examples include Elm, Haskell, LISP, and ReScript.

Peanut butter cup languages

Most languages are either primarily functional or primarily object oriented, but there are also a few unopinionated languages designed with both paradigms in mind. You can use these languages in either way, or you can use both feature sets and combine them in whatever ways make the most sense for your program.

You may find that you really like this Swiss-army-knife approach, or you may decide a language shouldn't try to be a Jack-of-all-trades. (I titled this section based on an old joke about the Scala programming language that I feel really captures both sentiments.)

Scala and F# are both firmly in this category, with honorable mention to Kotlin and strict-mode TypeScript. I would not, however, include languages like Java and C# in this category because they are both so primarily object oriented; while they support functional programming, they do little to encourage exploration in that area.

Syntax

This is a minor point, but it's worth working with different syntactic styles. To generalize broadly, programs are made up of blocks, which are made up of lines; blocks can be surrounded by braces or offset with indentation, and line ends can be marked by semicolons or inferred from newline characters. There are some exceptions (LISP, for example, is sometimes affectionately called Lots of Infuriating and Silly Parentheses because, well, it delimits everything with parentheses), but you'll find most languages lend themselves to one of these patterns.

The distinction can be a little blurry. Scala and JavaScript, for example, have optional semicolons, and many a holy war has been fought over whether it's best to have them there. I would recommend learning at least one language that doesn't use significant whitespace at all and one language that primarily uses significant whitespace.

What you should take away from these different experiences is that code readability is subjective. We can poll and study and let the market decide, but at the end of the day, you might find a language very clear and easy to read that I think looks like gibberish, or vice versa. You may join the syntax holy wars, or you may decide you fundamentally don't care. Clearly, there's room in the market for a variety of preferences.

C-like

C is an old language, and its syntax choices have influenced other languages which prefer visible delimiting characters over whitespace. As a result, in many languages, blocks are surrounded with braces and lines end in semicolons.

Proponents of C-like languages argue that having a visible character like a brace at the beginning and end of a block makes it easier to tell where a block begins and ends. It also makes formatting more flexible, because you don't have to break logical lines of code into separate visual lines--you can put multiple statements on one line, or even a whole block on one line, stylistically, as long as you have the proper characters in place.

pseudocode

// you can do this if you want
if (condition) { statement(); anotherStatement(); aThirdStatement(); } else { }

// even though most code is formatted like this
if (condition) 
{ 
    statement(); 
    anotherStatement(); 
    aThirdStatement(); 
} 
else 
{ 
}

Examples of C-like languages include Java, C#, and, of course, C.

Whitespace delimited

Opponents of C-like syntax argue that the visible delimiting characters are redundant because most of the time the newlines and indentation are necessary for readability anyway. They argue that omitting visible delimiting characters makes the actual content of the code--the function calls--more visible, and therefore the code more readable overall.

Python is many people's first introduction to whitespace-delimited languages, but there are others. Many functional languages such as Elm and OCaml use whitespace to different degrees.

In conclusion

I hope this article has been a good starting point. I've tried to include some example languages (that I know of) in each section, but I encourage you to look beyond them as you explore. Importantly, don't get over-focused on the "best" language to learn. Ultimately, if there's a language that is intuitive and enjoyable to you, that will be your best and most profitable language. Happy coding!