The Evolution of Programming Languages

Jan 24, 2023
5 min

Why do you use a programming language? 

The simple answer is to program. However, this answer does not provide insight into how programming languages work nor which one you should use for a given task.

Understanding the timeline of a technology is critical to using the technology effectively. Therefore, learning about the history of programming languages is important.

A programming language functions as an interface between humans and machines. So, humans use programming languages to tell man-made machines what to do.

The alternative to using a programming language is not using software at all. You can always do math by hand. But this is tedious, so humans created computers to automate mathematical operations.

For a full timeline on the Computer History of Software and Languages, check out Computer History (Software & Languages).

It Starts With a Switch

How can you perform math in an automated manner?

The first step involves electricity.

The second step involves a switch: specifically, the idea of a light switch, which flips on to represent a 1 and flips off to represent a 0.

A light switch.

In order to perform mathematical operations, you must store symbols such as 2 and 4 (as 2 + 2 = 4).

But how can you represent a number such as 4 with a light switch?

Instead of requiring 10 light switches to represent a single digit (e.g., 4) in a base-10 numeral system, a single light switch, which can be on (1) or off (0), is used to represent a single digit in a base-2 (binary) number system. So, computers store numbers using 1s and 0s.

Here is more information about number systems.

In a base-10 system, the number 10 represents, from right to left, the sum of 0 (0 * 10^0) and 10 (1 * 10^1): the result is 10.

In a base-2 system, the number 10 represents, from right to left, the sum of 0 (0 * 2^0) and 2 (1 * 2^1): the result is 2.
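
To make the positional arithmetic concrete, here is a small Go sketch (the function and variable names are my own, not from the linked material) that expands the numeral 10 in both bases:

```go
package main

import "fmt"

// valueOf expands a numeral digit by digit, from right to left,
// summing digit * base^position for each position.
func valueOf(digits []int, base int) int {
	total := 0
	place := 1 // base^0
	for i := len(digits) - 1; i >= 0; i-- {
		total += digits[i] * place
		place *= base
	}
	return total
}

func main() {
	digits := []int{1, 0} // the numeral "10"

	fmt.Println(valueOf(digits, 10)) // base-10: prints 10
	fmt.Println(valueOf(digits, 2))  // base-2:  prints 2
}
```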

The prefix “bi” means “two”. So, a binary number system uses two digits.

In a computer, 1 and 0 are the two digits used in the binary number system. Instead of “light switches”, microscopic transistors are used to switch electrical signals within the computer. Each transistor represents a 1 or a 0, which is known as a bit. However, a single bit on a computer doesn’t mean much without context.

For more information on reading and writing binary numbers, watch Why Do Computers Use 1s and 0s?


How can you represent words on a computer in binary?

Numerous character set standards were created in order to represent words in binary (e.g., ASCII, ANSI, EBCDIC).

The significance of these character sets is that they are man-made standards that provide a specification for other machines. These standards specified how many bits were required to encode a character (e.g., the w in word). 8-bit computing became standardized in central processing units as a result of popular character sets.

A group of 8 bits is called a byte.
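
As a rough sketch of character encoding (my own example, assuming an ASCII/UTF-8 encoding), the following Go snippet shows the single byte that stores the character w:

```go
package main

import "fmt"

func main() {
	// The character 'w' is stored as a single byte (8 bits) in ASCII/UTF-8.
	var w byte = 'w'

	fmt.Println(w)          // decimal value: 119
	fmt.Printf("%08b\n", w) // binary value:  01110111
	fmt.Printf("%c\n", w)   // back to the character: w
}
```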

8-bit machines would evolve into 16-bit, 32-bit, and 64-bit machines. These machines would lead to the representation of binary numbers using the more compact octal and hexadecimal number systems.
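
For a quick illustration (my own sketch, not from the linked history), the same value can be written out in each of these number systems using Go’s formatting verbs:

```go
package main

import "fmt"

func main() {
	n := 255 // the largest value a single byte can hold

	fmt.Printf("binary:      %b\n", n) // 11111111
	fmt.Printf("octal:       %o\n", n) // 377
	fmt.Printf("decimal:     %d\n", n) // 255
	fmt.Printf("hexadecimal: %x\n", n) // ff
}
```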

For more information on the history of computers, read Computer History (Computers).

Introducing The Compiler

Machine code is the language of computers, expressed in bits (1s and 0s).

Writing 1s and 0s to create programs is tedious and complex for humans. So, computer programs called compilers were created to convert human-readable code into machine code.

Once compiled, a program is still interpreted by something: the Central Processing Unit (CPU) is the final interpreter of machine code.

From this point onwards, a pattern emerged: Programming languages were created to make it easier for humans to read and write code. 

Rather than compiling directly to machine code, some languages (e.g., C++) would compile to other languages (e.g., Assembly), which are then translated into machine code. This evolution led to the classification of languages into high-level programming languages and low-level programming languages.
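
As a simplified sketch of this pipeline using Go, which also compiles ahead of time (the file name below is my own example), a tiny program can be compiled to a machine-code executable, and the intermediate assembly can be inspected along the way:

```go
// hello.go
//
// Compile to a machine-code executable with:  go build hello.go
// Inspect the generated assembly with:        go tool compile -S hello.go
package main

import "fmt"

func main() {
	fmt.Println("Hello, machine code!")
}
```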

For more information on compilers, read about How Compilers Work.

The History of Programming Languages

The “History of Programming Languages” and “Timeline of Programming Languages” documents showcase various programming languages alongside their objectives, predecessors, and successors. Knowledge of these tools will assist you in creating more performant programs in a maintainable manner. With that being said, modern programming languages are typically broken down into specific categories to highlight their use cases.

Interpreted vs. Compiled

Understanding the meaning of compiled and interpreted (in computing) highlights the difference between compiled and interpreted programming languages. As a reminder, a compiler compiles code from one form (e.g., human-readable code) to another (e.g., machine code). A compiled language is translated to machine code ahead of time and does NOT use an interpreter at runtime, which is typically beneficial for performance. In contrast, an interpreted language is interpreted at runtime, which is typically beneficial for code iteration (programmer productivity).

Unmanaged vs. Managed

Programming languages may also be classified by the way they handle computer memory. Processing bits on a Central Processing Unit (CPU) is fast, but what if you need to store information (e.g., variables)? Computer memory and other computer storage options solve this problem at the cost of processing speed.

Random Access Memory (RAM) is built for high-speed access to physical locations of the computer called memory addresses. Each memory address refers to a location that stores binary data representing data or instructions. So a program is able to store and retrieve data from memory by writing to and reading from a memory address. However, RAM is erased when a computer shuts down.
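
A loose sketch of this idea in Go (the variable names are mine): a program can print the memory address where a value lives, then read and write through that address.

```go
package main

import "fmt"

func main() {
	count := 42

	// &count is the memory address where count is stored in RAM.
	address := &count
	fmt.Println("address:", address) // e.g., 0xc000012028 (varies per run)

	// Writing to and reading from that address.
	*address = 100
	fmt.Println("value:", count) // 100
}
```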

Unexpected behavior occurs when a program mishandles memory. So certain languages do NOT require the programmer to manage memory manually; instead, an automated form of memory management, such as a garbage collector, is provided in the runtime. A managed memory programming language provides the programmer with an automated form of memory management, while an unmanaged memory programming language requires the programmer to manually manage the computer’s memory.
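
Go is one example of a managed language. The sketch below (my own, not from the original sources) allocates memory repeatedly without ever freeing it; the garbage collector reclaims it automatically.

```go
package main

import (
	"fmt"
	"runtime"
)

// allocate creates a short-lived slice; nothing in the program frees it.
func allocate() []byte {
	return make([]byte, 1<<20) // 1 MiB
}

func main() {
	for i := 0; i < 100; i++ {
		_ = allocate() // each slice becomes garbage after this line
	}

	runtime.GC() // request a collection so the stats below are up to date

	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	fmt.Println("completed GC cycles:", stats.NumGC)
}
```

In an unmanaged language such as C, each of those allocations would need a matching, manual deallocation.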

Solid State Drives, Hard Disk Drives, and other direct-access data storage solutions are built for high-capacity, long-term storage that persists data beyond the power state of a computer (on/off).

Typed vs. Untyped

In order to compile a program, the language must be able to verify that the code can run correctly. It’s common for modern languages to use data types to check the correctness of a program. Data types serve as an alternative to managing information (e.g., numbers and words) directly with 1s and 0s. For more information on data types, watch What Are Data Types?
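
As a rough sketch (my own example), data types in Go describe how a group of bits should be interpreted and how many bits a value needs:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// Data types describe what a group of bits means.
	var age int32 = 30       // 32 bits interpreted as a whole number
	var price float64 = 9.99 // 64 bits interpreted as a floating-point number
	var name string = "Ada"  // bytes interpreted as text
	var active bool = true   // a single yes/no value

	fmt.Println(unsafe.Sizeof(age), "bytes for an int32")    // 4
	fmt.Println(unsafe.Sizeof(price), "bytes for a float64") // 8
	fmt.Printf("%T %T %T %T\n", age, price, name, active)    // the types themselves
}
```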

A strongly typed language strictly enforces its type rules (e.g., the var in int var = 5 stays an integer unless it is explicitly converted), while a weakly typed language allows values to be implicitly converted between types. A statically typed language performs type checks at compile time (with the compiler), while a dynamically typed language performs type checks at runtime (with an interpreter or runtime). A nominally typed language performs type checks using a type’s name, while a structurally typed language performs type checks using a type’s underlying structure.

An example of a structural type check appears when comparing functions in the Go programming language: func clap(a int) and func comment(b int) are interchangeable because both functions are structurally defined as func(int).
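
A minimal sketch of that idea (the function names mirror the ones above, but the program itself is my own):

```go
package main

import "fmt"

func clap(a int)    { fmt.Println("clap", a) }
func comment(b int) { fmt.Println("comment", b) }

func main() {
	// Both functions share the structure func(int), so either one
	// can be assigned to the same variable, regardless of its name.
	var react func(int)

	react = clap
	react(1)

	react = comment
	react(2)
}
```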

Paradigms

Programming paradigms provide mental models that assist programmers in solving programming problems. Certain programming languages may subscribe to programming paradigms such as Object Oriented Programming or Functional Programming. The importance of these paradigms is debatable, but their significance is that they may influence how a typical program is created with a given programming language.
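
As a loose sketch in Go (my own example, not tied to any particular language’s conventions), the same behavior can be expressed in an object-oriented style or a functional style:

```go
package main

import "fmt"

// Object-oriented style: data (Counter) bundled with behavior (Increment).
type Counter struct {
	value int
}

func (c *Counter) Increment() {
	c.value++
}

// Functional style: a pure function that returns a new value
// instead of mutating shared state.
func increment(value int) int {
	return value + 1
}

func main() {
	c := Counter{}
	c.Increment()
	fmt.Println("object-oriented:", c.value) // 1

	fmt.Println("functional:", increment(0)) // 1
}
```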
