The Evolution of Programming Languages

Jan 24, 2023
5 min

Why do you use a programming language? The simple answer is to write code. However, this answer does not provide insight into how programming languages work, nor which one you should use for a given task. Understanding the timeline of a technology is critical to using it effectively, which makes the history of programming languages worth learning.

A programming language functions as an interface between humans and machines. In other words, humans use programming languages to tell man-made machines how to run. The alternative to using a programming language is not using software at all, which is unproductive for mathematical operations. Thus the computer was born, with programs following not long after.

For a full timeline on the Computer History of Software and Languages, check out Computer History (Software & Languages).

It Starts With a Switch

How can you perform math in an automated manner? The first step involves electricity. The second step involves a switch: specifically, the idea of a light switch, which can be flipped on to represent a 1 and flipped off to represent a 0. In order to perform mathematical operations, you must store symbols such as 2 and 4 (as in 2 + 2 = 4). So how can you represent a number such as 4 with a light switch?

A light switch.

Instead of requiring 10 light switches to represent a single digit (e.g., 4) in a base-10 numeral system, a single light switch, which can be on (1) or off (0), is used to represent a single digit in a base-2 (binary) number system. In other words, computers store numbers using 1s and 0s. In a base-10 system, the number 10 represents, from right to left, the sum of 0 (0 * 10^0) and 10 (1 * 10^1): the result is 10. In a base-2 system, the number 10 represents, from right to left, the sum of 0 (0 * 2^0) and 2 (1 * 2^1): the result is 2.
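The right-to-left, digit-by-digit sums above can be sketched in Python (the `expand` function here is an illustrative helper, not part of any standard library):

```python
# Expand a numeral string digit by digit in a given base,
# summing digit * base^power from right to left.
def expand(digits: str, base: int) -> int:
    total = 0
    for power, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** power
    return total

print(expand("10", 10))  # base-10 "10" -> 10
print(expand("10", 2))   # base-2  "10" -> 2
print(expand("100", 2))  # base-2 "100" -> 4
```

The same string of digits yields a different value depending on the base, which is exactly why "10" means ten to a human and two to a binary machine.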

The prefix “bi” means “two”. Therefore, a binary number system uses two digits. In a computer, 1 and 0 are the two digits used in the binary number system. Instead of “light switches”, transistors are used to switch electrical signals within the computer. Each transistor represents a 1 or 0, which is known as a bit. However, a single bit on a computer doesn’t mean much without context.

For more information on reading and writing binary numbers, watch Why Do Computers Use 1s and 0s?


How can you represent words on a computer in binary? Numerous character set standards (e.g., ASCII, ANSI, EBCDIC) were created in order to represent words in binary. These character sets are human-created standards which provide a specification for machines to follow. Each standard specifies how many bits are required for a character (e.g., the w in word). As a result, 8-bit computing became standardized in computer processing units. An 8-bit number, which contains 8 bits, is called a byte.
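As a concrete example, ASCII maps each character to a number that fits in a single byte, which Python can show directly:

```python
# Each character in a word maps to an ASCII code point,
# which is stored as an 8-bit binary number (a byte).
word = "word"
for char in word:
    code = ord(char)            # numeric ASCII code point
    bits = format(code, "08b")  # 8-bit binary representation
    print(char, code, bits)
# e.g., 'w' is 119, stored as the byte 01110111
```

A word is therefore just a sequence of bytes, and the character set standard is what gives those bytes meaning.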

8-bit machines would evolve into 16-bit, 32-bit, and 64-bit machines. These machines would lead to the representation of numbers using octal and hexadecimal number systems. For more information on the history of computers, read Computer History (Computers).

Introducing The Compiler

Machine code represents the language of computers, which use bits (1s and 0s). Of course, writing 1s and 0s to create programs is tedious and complex for humans. As a result, computer programs called compilers were created in order to convert human-readable code into machine code. Once compiled, a program is interpreted using an interpreter. As an example, the Central Processing Unit (CPU) is the final interpreter of machine code.

From this point onwards, a pattern emerged: programming languages were created in order to make it easier for humans to read and write code. Rather than compile directly to machine code, certain languages (e.g., C) would compile to other languages (e.g., Assembly), which in turn compile to machine code. This led to the separation of languages into levels, such as high-level programming languages and low-level programming languages.

For more information on compilers, read about How Compilers Work.

The History of Programming Languages

The “History of Programming Languages” and “Timeline of Programming Languages” documents showcase various programming languages alongside their objectives, predecessors, and successors. Knowledge of these tools will assist you in creating more performant programs in a maintainable manner. With that being said, modern programming languages are typically broken down into specific categories to highlight their use cases.

Interpreted vs. Compiled

Understanding the meaning of compiled and interpreted (in computing) highlights the difference between compiled and interpreted programming languages. As a reminder, a compiler translates code from one form (e.g., human-readable code) to another (e.g., machine code). A compiled language implies that the language will NOT use an interpreter at runtime, which is typically beneficial for performance. In contrast, an interpreted language implies that the language will be interpreted at runtime, which is typically beneficial for code iteration (programmer productivity).
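Python itself is a useful illustration, since it exposes both steps: source code is compiled to bytecode, which the interpreter then executes at runtime. A minimal sketch:

```python
# Python compiles source code to bytecode, then interprets it.
source = "result = 2 + 2"

# Step 1: compile the human-readable source to a bytecode object.
bytecode = compile(source, "<example>", "exec")

# Step 2: the interpreter executes the bytecode at runtime.
namespace = {}
exec(bytecode, namespace)

print(namespace["result"])  # 4
```

This is why "compiled vs. interpreted" describes how a language is typically implemented rather than a hard property of the language itself.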

Unmanaged vs. Managed

Programming languages may also be categorized by the way they handle computer memory. Processing bits on a Central Processing Unit (CPU) is fast, but what if you need to store information (e.g., variables)? Computer memory solves this problem at the cost of speed (in a similar manner to a hard drive). The difference is that Random Access Memory (RAM) is built for fast retrieval, while hard drives (and similar technologies) are built for resilient storage.

As a reminder, RAM is erased when a computer shuts down.

Unexpected behavior occurs when a program handles memory incorrectly. As a result, certain languages do NOT require manual memory management, providing their own automated form of memory management instead (e.g., garbage collectors). These managed memory programming languages function as an alternative to unmanaged memory programming languages, which require you to manage the computer's memory manually.
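Python's garbage collector is one example of managed memory. The sketch below (with a made-up `Node` class for illustration) creates a reference cycle that manual reference counting alone could not reclaim, then lets the collector clean it up:

```python
import gc

# A hypothetical class used only to build a reference cycle.
class Node:
    def __init__(self):
        self.partner = None

a, b = Node(), Node()
a.partner, b.partner = b, a  # a references b, b references a
del a, b                     # no outside references remain

# The garbage collector finds and reclaims the unreachable cycle.
collected = gc.collect()
print(collected >= 2)  # at least the two Node objects were found
```

In an unmanaged language such as C, the programmer would have to free this memory explicitly; forgetting to do so is a classic memory leak.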

Typed vs. Untyped

In order to compile a language, the compiler must be able to verify that the code can run. Instead of managing information (e.g., numbers and words) as raw 1s and 0s, most programming languages use data types. For more information on data types, watch What Are Data Types? The difference between a typed and untyped language is whether its variables (e.g., var in int var = 5) require types.
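Python illustrates the untyped (dynamically typed) side: types attach to values rather than variables, so the same name can be rebound to a value of a different type at runtime, whereas a statically typed language such as C would reject this at compile time.

```python
# In Python, the variable itself carries no type declaration.
var = 5
print(type(var).__name__)  # int

# The same name can later hold a value of a different type.
var = "five"
print(type(var).__name__)  # str
```

In C, by contrast, `int var = 5;` fixes the type of `var`, and assigning a string to it would be a compile-time error.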

Paradigms

Programming paradigms provide mental models that assist programmers in solving programming problems. Certain programming languages may subscribe to programming paradigms such as Object Oriented Programming or Functional Programming. The importance of these paradigms is debatable. Their significance is that they may influence how a typical program is created with a given programming language.
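Since Python supports multiple paradigms, the same task (summing the squares of a list) can sketch the contrast; the `Accumulator` class below is an illustrative example, not a standard API:

```python
from functools import reduce

# Object-oriented style: state and behavior bundled in a class.
class Accumulator:
    def __init__(self):
        self.total = 0

    def add_square(self, n):
        self.total += n * n

acc = Accumulator()
for n in [1, 2, 3]:
    acc.add_square(n)
print(acc.total)  # 14

# Functional style: a pure fold over the data, no mutable state.
print(reduce(lambda total, n: total + n * n, [1, 2, 3], 0))  # 14
```

Both produce the same result; the paradigm changes how the program is organized, not what it can compute.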

Read More

Improve Your Mindset
