
Which programming languages to learn?

Personal development is a big topic for many developers, especially at the beginning of a new year. Learning a new programming language is particularly well suited to this. So which languages are worth a look?

For many developers, last week was the first working week of the new year. A new year is often associated with good resolutions, especially in the area of personal development. This topic was discussed in detail last December in the native web's Advent special, so it makes sense to use the recommendations made there as a basis for those resolutions.

One of those recommendations was to learn new programming languages on a regular basis, as this elegantly combines theory and practice. For obvious reasons, it is hardly possible to be equally proficient in numerous languages; nevertheless, conceptual knowledge can be extracted very well from engaging with other languages and then transferred to one's own everyday work.

This article suggests five languages, old and young, and by its very nature such a choice is always subjective. For each language there is an explanation of what speaks for it and why it was selected, but the list does not claim to be an objective truth. Nevertheless, studying these five languages teaches enough that is worth knowing to make the effort worthwhile.

Assembler

It starts with assembler, which is ultimately nothing more than machine language brought into a human-readable form. Assembler does not correspond to the binary code, which consists only of zeros and ones, but to the so-called opcodes. These are the commands that a CPU knows and can process. Examples from x86 assembler would be mov, with which a value can be stored in a register, and add, with which two values can be added.
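The two opcodes just mentioned can be sketched as follows, here in Intel syntax for x86-64 (register choice and values are arbitrary):

```asm
mov rax, 5      ; store the value 5 in register rax
mov rbx, 3      ; store the value 3 in register rbx
add rax, rbx    ; add rbx to rax; rax now holds 8
```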

Although assembler has hardly any practical relevance nowadays, studying the language provides an excellent basic understanding of the structure and functioning of computers, especially the CPU and how it works with memory. This knowledge matters because modern high-level languages are ultimately just layers of abstraction above this level.

However, every abstraction is leaky to a certain extent: although compilers are very good at translating code from high-level languages into optimized machine code, there are still situations in which the generated code runs unexpectedly slowly because it hits a special case that the CPU can only process very laboriously. It is therefore helpful to understand what is going on under the hood, for example to better assess and optimize the performance and efficiency of code.

Go

Unlike assembler, Go is a relatively young language: the first stable version was presented by Google in 2012. The idea behind Go was to develop a language like C and C++, but without their historical baggage, a kind of "modern C". This goal is clearly noticeable in Go, which is less suited to application development than to systems-related development. Accordingly, Go is widely used, especially in the cloud sector.

What is remarkable about Go is the great emphasis placed on simplicity and clarity. For example, there is only one kind of loop, the for loop. Exceptions are dispensed with entirely; instead, functions return errors as part of their return value, which, compared to try-catch constructs, leads to more linear and often more readable, and therefore more understandable, code.
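A minimal sketch of this error-handling style, using a hypothetical divide function:

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns an error as part of its return value
// instead of throwing an exception.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	result, err := divide(10, 4)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(result) // 2.5
}
```

The caller is forced to deal with the error right where it occurs, which is exactly what makes the control flow linear.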

Go's type system is static, and Go also supports object-oriented programming. However, the type system is not nominal like that of C# and Java, but structural like that of TypeScript: interfaces do not have to be implemented explicitly; it is sufficient for a type to have the appropriate external shape. Go also uses garbage collection and knows pointers, but no pointer arithmetic.
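A minimal sketch of structural interface satisfaction, with hypothetical Describable and Dog types:

```go
package main

import "fmt"

// Any type with a Describe() string method satisfies this
// interface implicitly -- there is no "implements" keyword.
type Describable interface {
	Describe() string
}

type Dog struct{ Name string }

// Dog never declares that it implements Describable;
// having a method with the right shape is enough.
func (d Dog) Describe() string {
	return "dog named " + d.Name
}

func main() {
	var d Describable = Dog{Name: "Rex"}
	fmt.Println(d.Describe())
}
```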

The Go compiler creates statically linked binaries that do not require any additional runtime environment or libraries. Notably, the compiler can also generate binaries for platforms other than the one on which it runs.
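Cross-compilation is controlled via the GOOS and GOARCH environment variables; a sketch, assuming a Go toolchain and a buildable package in the current directory (the output names are arbitrary):

```shell
# Build a Linux AMD64 binary, regardless of the host platform.
GOOS=linux GOARCH=amd64 go build -o myapp-linux .

# A Windows binary from the same source tree:
GOOS=windows GOARCH=amd64 go build -o myapp.exe .
```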

A great feature of Go is its approach to concurrent programming. Concurrency is implemented with so-called goroutines, which communicate with one another via so-called channels. A channel represents a kind of in-memory message queue that, in addition to communication, can also be used to synchronize goroutines.
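A minimal sketch of goroutines and channels, with a hypothetical sum function:

```go
package main

import "fmt"

// sum sends the sum of its slice into the channel,
// so main can run several sums concurrently.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total
}

func main() {
	out := make(chan int)
	go sum([]int{1, 2, 3}, out)
	go sum([]int{4, 5, 6}, out)

	// Receiving from the channel also synchronizes:
	// main blocks until both goroutines have sent.
	a, b := <-out, <-out
	fmt.Println(a + b) // 21
}
```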

This, in conjunction with the ability to generate compact and efficient binaries for different platforms, is a good argument for taking a closer look at Go.

Haskell

In contrast to Go, Haskell is a functional language, in fact a purely functional one. Originally presented in 1990, it now has numerous implementations, the most important of which is likely the Glasgow Haskell Compiler (GHC). That Haskell is purely functional means that functions generally have no side effects and are "pure"; there are no imperative language constructs.

Aspects such as I/O that depend on side effects are implemented in Haskell via the type system. In connection with monads and the so-called do notation, quasi-imperative code becomes possible, which is nevertheless based on functional constructs under the hood. The type system is static but supports type inference to a high degree, which is why developers rarely have to specify types by hand.
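A minimal sketch of do notation; the makeGreeting function is illustrative:

```haskell
-- IO actions are ordinary values of type IO a; do notation
-- sequences them while the code stays purely functional.
main :: IO ()
main = do
  let greeting = makeGreeting "Haskell"  -- pure function, no side effects
  putStrLn greeting                      -- an IO action

-- The signature could be omitted: type inference would find it.
makeGreeting :: String -> String
makeGreeting name = "Hello, " ++ name ++ "!"
```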

Since Haskell also knows type variables, functions can often be written flexibly and generically, yet used in a type-specific manner. Through the functional paradigm, one comes into contact with numerous other functional constructs, from higher-order functions to currying, algebraic data types, lazy evaluation, infinite generators, and pattern matching. Haskell is therefore an excellent language for learning functional concepts.
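A small sketch of some of these constructs; all names are illustrative:

```haskell
-- An infinite list is fine under lazy evaluation:
-- only the demanded prefix is ever computed.
naturals :: [Integer]
naturals = [0 ..]

-- Pattern matching on the structure of a list.
firstTwo :: [a] -> Maybe (a, a)
firstTwo (x : y : _) = Just (x, y)
firstTwo _           = Nothing

-- Currying: addThree is (+) partially applied to 3.
addThree :: Integer -> Integer
addThree = (+) 3

main :: IO ()
main = print (take 5 (map addThree naturals))  -- [3,4,5,6,7]
```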

Many of these constructs have no native equivalent in other languages, but they nonetheless provide new impulses and broaden one's horizons.

Lisp

By today's standards, Lisp is the second-oldest programming language still in use, after Fortran. Introduced in 1958 by John McCarthy, Lisp has lost none of its elegance to this day. It is based on the lambda calculus and offers an extremely simple yet extremely flexible and powerful syntax. Accordingly, Lisp is also multi-paradigm: the language supports procedural as well as functional and object-oriented programming.

Many language constructs that seem natural today have their origins in Lisp, including conditionals, recursion, functions as values, dynamic typing, garbage collection, a symbol data type, and metaprogramming. A special feature is the so-called homoiconicity: data and code are represented the same way in Lisp, which makes it relatively easy to write programs that modify themselves or generate other programs.
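A minimal Common Lisp sketch of homoiconicity; the variable name is arbitrary:

```lisp
;; Code is data: a program is just a list.
(defvar *expr* '(+ 1 2 3))

;; Manipulate it like any other list ...
(setf *expr* (append *expr* '(4)))   ; now (+ 1 2 3 4)

;; ... and then execute it.
(print (eval *expr*))                ; => 10
```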

All these aspects show why it is worth engaging with Lisp. The American computer scientist Eric S. Raymond once said that Lisp is a language worth learning even if it is never used in practice, because the gain in insight alone makes you a better developer. There is hardly anything to add to that.

Anyone who wants to explore Lisp will come across (as with Haskell) numerous different implementations, including Common Lisp, Scheme, and Clojure. The best-known great-grandchild of Lisp, however, is JavaScript: the language borrows a great deal from Lisp; in fact, the original goal for JavaScript was to develop a Lisp with C-like syntax for the web browser. Anyone who learns Lisp therefore also learns a lot about JavaScript and improves their understanding of that language along the way.


Python

Of all the languages mentioned, Python is probably the most widespread. In the past few years, Python has become very popular, especially in the areas of artificial intelligence and machine learning. However, Python is not a new language: its origins go back to 1991, when Guido van Rossum presented the first version. Python's goals were simplicity and clarity, two qualities the language has retained to this day.

Python is also multi-paradigm. The type system is dynamic, though optional static type annotations are available. Striking features are the extraordinarily large standard library and the availability of numerous libraries for scientific use, for example NumPy and SciPy. For this reason, many artificial intelligence and machine learning libraries have been developed for Python, most notably TensorFlow and Keras, which has made Python the de facto standard language in this field.
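A minimal sketch of optional type annotations; the mean function is illustrative:

```python
# Dynamic typing with optional static annotations (PEP 484 style).
def mean(values: list[float]) -> float:
    """Arithmetic mean; annotations are hints, not enforced at runtime."""
    return sum(values) / len(values)

# The annotations do not restrict the runtime: ints work, too.
print(mean([1, 2, 3, 4]))  # 2.5
```

Tools such as static type checkers can use these hints, while the interpreter itself ignores them.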

Also worth mentioning is the "Zen of Python", a collection of 19 aphorisms that describe the basic principles of development in Python. They transfer easily to other languages and, from a learning perspective, they are perhaps the most important contribution Python makes to gaining knowledge.
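The aphorisms can be displayed directly in the interpreter with import this; note that the rot13-encoded attribute this.s used below is a CPython implementation detail, not a documented API:

```python
import codecs
import this  # importing the module prints the Zen of Python

# The text is also available rot13-encoded as this.s
# (a CPython implementation detail).
zen = codecs.decode(this.s, "rot13")
print("Beautiful is better than ugly." in zen)  # True
```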

Conclusion

Assembler, Go, Haskell, Lisp, and Python: anyone who decides to engage with these languages over the course of 2021 has a good starting point for learning new things that are also useful beyond the languages themselves. It is particularly convenient that the languages mentioned cover a certain range, from old to young, from systems programming to artificial intelligence, from procedural to object-oriented to functional.

And as mentioned, the goal of all this is not to be able to work professionally in each of these languages by the end of the year. Often it is the small insights between the lines that achieve far more than trying to do everything at once, which never works anyway. In this spirit: have fun and good luck!

Golo Roden

Golo Roden is the founder, CTO and managing director of the native web GmbH, a company specializing in native web technologies.
